The development of remote sensing in the last 40 years

International Journal of Remote Sensing 2018, Vol. 39, No. 23, 8387–8427. https://doi.org/10.1080/01431161.2018.1550919


Introduction
This editorial has its origins in a keynote presentation entitled 'The Evolution of the Development of Remote Sensing Technologies – the Last 40 years' which I gave at the 9th International Conference and Exhibition on Geospatial and Remote Sensing (9 IGRSM 2018), 'Geospatial Enablement', in Kuala Lumpur, 24–25 April 2018. The editorial is not intended to be a definitive history of remote sensing from the beginning up to the day of its submission for publication. Rather it represents a personal account to try to enable present-day practitioners of remote sensing to gain a slight appreciation of what went before the time when they were introduced to the subject. The fun in our group in the 1980s was being able to explore many possible new applications of remote sensing, some of which turned out to be successful and some of which turned out to be failures – for various reasons. At first glance it may seem that the list of references is woefully inadequate. However, this is not an encyclopaedic review of remote sensing as it now is, but an attempt to recall some of the history of how we got here. The references are only meant to document some of the things that are said. For other information we assume that readers will consult whatever search engine, Google, etc., that they commonly use.
I chose 40 years because it seemed to me that 1978 was a landmark year for remote sensing. In that year three very important new satellite systems were launched into space, the TIROS-N satellite with the AVHRR (Advanced Very High Resolution Radiometer) on board, the SEASAT satellite and the NIMBUS-7 satellite with the CZCS (Coastal Zone Colour Scanner) on board. In addition to all these, the third satellite in the Landsat programme (Landsat 3) was launched in March 1978. Of rather less importance, it was the year of my very first remote sensing project which involved attempting to use CZCS data to study water quality parameters; we learned the hard way about the difficulties involved in conducting field experiments on a rapidly changing environmental system simultaneously with satellite overflights. 1978 was also just before the launch of the International Journal of Remote Sensing (IJRS) in 1980 and so the initial work on the start up of the IJRS was being done in 1978.
This editorial is therefore divided into three parts: (a) Part 1, remote sensing before 1978; (b) Part 2, 1978, the year of the launch of three very important polar-orbiting satellites; and (c) Part 3, remote sensing since 1978.
Textbooks sometimes define remote sensing to mean the observation of, or gathering of information about, a target by a device separated from it by some distance. In practice it is usually taken to be more restricted than that. It is sometimes claimed that the expression 'remote sensing' was coined by geographers at the U.S. Office of Naval Research in the 1960s at about the time that the use of 'spy' satellites was beginning to move out of the military sphere and into the civilian sphere. Remote sensing is often regarded as being synonymous with the use of artificial satellites, but there is an ongoing history of air photos that preceded the satellites and goes right up to the recent development of UAVs (drones) which are likely to supersede satellites in some areas (Liao et al. 2018). We shall consider homing pigeons, aerial photographs, Sputnik, Landsat and polar orbiting and geostationary weather satellites. We shall not discuss model aircraft, although seen from today's viewpoint model aircraft can be regarded as the precursors of the important new development, namely drones or unmanned aerial vehicles (UAVs), as remote sensing platforms.
Historians of remote sensing cite various examples of the earliest attempts at obtaining remote sensing images such as using cameras carried by passengers in the baskets of hot air balloons, pigeons carrying cameras or even by people carrying cameras up a hillside or up a tower, e.g. the Eiffel Tower in Paris. In 1907 Julius Neubronner developed a light miniature camera that could be fitted to a pigeon's breast with a harness (Figure 1). An exhibit based on the photograph in Figure 1 is to be seen in the Smithsonian Air and Space Museum in the USA, and there is an article on "Pigeon photography" in Wikipedia. To take an aerial photograph Neubronner carried a pigeon to a location nearly 100 km away from its home; it was fitted with a camera and then released and the bird would typically fly home on a direct route at a height of 50 m to 100 m. A pneumatic system controlled the time delay before a photograph was taken. This did in fact successfully produce images of the ground (Figure 2). Other platforms were also tried, balloons, kites, rockets and airships but none of them made much progress. Although there was some initial excitement over pigeon photography, other forms of aerial photography emerged, causing people to abandon the idea of the pigeon photographers. In some ways the pigeon camera was a precursor of the remote sensing UAV (unmanned aerial vehicle) or drone, which is one of the very latest systems to be introduced in remote sensing, as we shall see later on (section 3.7).
The first aerial photograph was claimed to have been taken in 1858 by Felix Tournachon, known as Nadar, from a tethered balloon over the Bièvre Valley in France. In the end it was (light) aircraft which really took off, so to speak, and aircraft were widely used in the Great War in Europe of 1914–1918. Light aircraft could fly over the battlefields of France and Belgium for reconnaissance with little chance of being shot down. A zeppelin (airship) conducted a bombing raid on London in 1915 destroying a building in Farringdon Road, see Figure 3. Of course, aerial photography has been widely used in various wars since then.
Apart from being used as imagery, civilian aerial photographs have been widely used for a very long time in cartography (in photogrammetric surveys for map making, particularly as the basis for making topographic maps). Specialist large format cameras for fitting into survey aircraft (looking vertically down, assuming the aircraft was flying horizontally) were developed. Such cameras were specially designed for taking near-vertical sequences of aerial exposures from an aircraft as it flies along. Such a camera would be bolted into a window in the floor of the fuselage of a light aircraft. The standard negative format is 230 mm × 230 mm.
It is one thing to take a photograph of the ground from an aircraft, but it is quite another thing to transfer the information/data to make or update a map at a given scale. Several pieces of information are required, including (a) the location of the aircraft when the photo was taken (recall that GPS is a relatively recent invention); (b) the orientation of the aircraft (roll, tilt and yaw), and therefore the direction of observation of the camera, i.e. the orientation of the principal axis of the lens; and (c) the scale of the photograph, which is related, of course, to the flying height, h, and the focal length, f, of the lens of the camera by scale = f/h. (d) If (a) and (b) are not known, one needs control points, more commonly known with satellite data as ground control points (gcps), i.e. a set of points for each of which both the location on the ground and the position on the map are known. Traditionally (a) and (b) were determined from control points. Survey flights are usually planned to involve flying in straight lines taking photographs at regular intervals, often choosing the interval to give a 60% overlap between successive photographs along the flight line. When flying along parallel flight lines it is common to aim for a 20% overlap between the photographs from one flight line and the next. The overlap is important in that it enables contours to be added to a map using stereoscopic pairs of photographs.
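The arithmetic of the scale relation and of flight-line planning can be sketched as follows. This is an illustrative calculation only; the focal length, flying height and overlap figures below are typical assumed values, not taken from any particular survey.

```python
# Sketch of vertical-photo scale and flight-line planning arithmetic.
# All numerical values are illustrative assumptions.

def photo_scale(focal_length_m, flying_height_m):
    """Scale of a vertical aerial photograph over flat terrain: scale = f/h."""
    return focal_length_m / flying_height_m

def ground_coverage_m(negative_side_m, focal_length_m, flying_height_m):
    """Ground distance covered by one side of the square negative."""
    return negative_side_m * flying_height_m / focal_length_m

def photo_base_m(coverage_m, forward_overlap=0.60):
    """Distance flown between exposures for a given forward overlap."""
    return coverage_m * (1.0 - forward_overlap)

f = 0.152      # a common survey-camera focal length, 152 mm (assumed)
h = 3040.0     # flying height above ground, metres (assumed)
side = 0.230   # the standard 230 mm negative format mentioned in the text

scale = photo_scale(f, h)               # 1:20,000
cover = ground_coverage_m(side, f, h)   # 4600 m on the ground per photo side
base = photo_base_m(cover)              # 1840 m between exposures (60% overlap)
print(round(1 / scale), round(cover), round(base))
```

With these assumed numbers one photograph covers a 4.6 km square, and a 60% forward overlap means an exposure every 1.84 km along the flight line.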
The geometry involved in constructing a map from aerial photographs is straightforward in principle but complicated and tedious in practice. Probably the best book on the subject is by Cliff Burnside, 'Mapping from aerial photographs' (Burnside 1979), which was first published in the year following our key date of 1978. This book runs to nearly 300 pages and it is packed full of descriptive text with numerous geometrical diagrams and masses of (simple) geometry. In the early days, i.e. pre-1978, some very complicated, and inevitably expensive, mechanical or optical equipment was designed and built and marketed commercially for applying all this geometry to the making of maps from sets of aerial photos.
In the old days map makers simply used differences of shading to indicate slopes and crags/cliffs on the terrain. We consider an example from Edinburgh, see Figure 4. Figure 4(a) shows a modern photograph of the centre of Edinburgh, Scotland, with north at the top. There is a ridge, topped by the main street, the Royal Mile, which runs from the castle in the west sloping gently down towards the east with the Palace of Holyroodhouse, HM the Queen's official residence in Scotland, about a mile (approximately 1.6 km) away and hardly visible near the right hand edge of the photograph. Figure 4(b) shows an old map, made before 1800. The castle is surrounded by very steep cliffs, except on its eastern side, and these cliffs are the reason why the castle was built there centuries ago as a defensive stronghold. The map maker has attempted to depict these cliffs by sketching them. The effect of the ridge is illustrated by the narrow streets running down on either side, almost directly north or directly south from the Royal Mile which runs along the top of the ridge. On a modern map the elevation would be indicated by contours, e.g. at 10 m intervals on a 1:50,000 scale map. The summit of the Castle Rock is 130 m above sea level and rises about 80 m above the surrounding landscape, the cliffs themselves being rather less than half this height.
The geometry of determining the height of a point in a scene from a stereo pair of photographs is illustrated in Figure 5. Photogrammetry, the science of measuring photographs, became very important, principally for map making and map revision, but also for some other purposes too, for example producing scale drawings, plans and elevations, of historic buildings. National societies for photogrammetry were founded and in 1910 the International Society for Photogrammetry (ISP) was founded. After 70 years of functioning under its original name, the Society changed its name in 1980 to the International Society for Photogrammetry and Remote Sensing (ISPRS).
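The relation behind Figure 5 can be sketched numerically. For a pair of vertical photographs taken with focal length f, air base B (the distance flown between exposures) and flying height H above a datum, the parallax of a ground point at elevation h is p = f·B/(H − h), which inverts to h = H − f·B/p. The numbers below are hypothetical and chosen only for illustration.

```python
# Sketch of the stereo-parallax height relation for a vertical photo pair.
# p = f * B / (H - h): parallax of a point at elevation h above the datum,
# for focal length f, air base B and flying height H. Inverting gives
# h = H - f * B / p. All numbers are illustrative assumptions.

def parallax(f, B, H, h):
    """Parallax (same units as f) of a point at elevation h above the datum."""
    return f * B / (H - h)

def height_from_parallax(f, B, H, p):
    """Elevation above the datum recovered from a measured parallax p."""
    return H - f * B / p

f = 0.152    # focal length, m (assumed)
B = 1840.0   # air base between exposures, m (assumed)
H = 3040.0   # flying height above the datum, m (assumed)

p0 = parallax(f, B, H, 0.0)      # parallax of a point on the datum
p1 = parallax(f, B, H, 130.0)    # parallax of a 130 m summit
recovered = height_from_parallax(f, B, H, p1)
print(round(recovered, 3))
```

The small difference between p1 and p0 is exactly what the opto-mechanical plotters of the period measured in order to draw contours from a stereopair.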
Some considerable feats of mechanical engineering were produced to enable contours to be plotted from stereopairs of air photos, i.e. pairs of photographs with the 60% overlap mentioned above, see Figure 6. This complicated and expensive opto-mechanical equipment has all been rendered obsolete by digital systems which were beginning to take over more or less round about our landmark date of 1978. All that equipment got scrapped, except for one or two examples that remain as museum pieces.

The first satellites
For a long time remote sensing was aerial photography and photogrammetry using analogue mechanical or optical equipment. That all changed with satellites and the space race. The space race refers to the 20th-century competition between the two Cold War rivals, the former Soviet Union (USSR) and the United States (USA), for dominance in spaceflight capability. It had its origins in the missile-based nuclear arms race between the two nations that occurred following World War II, aided by captured German missile technology and migrant German personnel. The technological superiority required for such dominance was seen as necessary for national security, and symbolic of ideological superiority. The space race began with the launch of Sputnik on 4 October 1957, see Figure 7. Sputnik was about the size of a (large) football, weighed 84 kg and had no camera on board. The space race led up to the manned mission that put a man on the Moon in July 1969 and brought him back, with some samples of Moon rock to provide research material for geologists. Between 1957 and 1978 the space race left a legacy of just over 20 years of rapid development of (a) communications satellites (a highly profitable commercial enterprise of some secondary relevance to remote sensing) and (b) various Earth-observing remote sensing satellites. It also left a continuing human space presence on the International Space Station, as well as sparking increases in spending on education and research and development, which led to beneficial spin-off technologies.
Satellite-related activities were almost entirely confined to the USA and the USSR during this period. India built its first satellite, Aryabhata, which was launched in 1975 by the USSR. China was also active during this period, launching its first communications satellite on 24 April 1970.

Weather satellites
The first real success of remote sensing satellites in serious scientific work was in meteorology. Previously some photographs of cloud systems were taken from the ground and from aircraft flying above the clouds. But satellites provided the opportunity of viewing cloud systems from above and over large areas and so numerous images of cloud systems were generated. Such images are very familiar these days. They are used to allow weather forecasters to see weather systems developing in a way that was not possible before. They are valuable to presenters of weather forecasts on television or on other media. They are valuable in training meteorologists and they are valuable to research workers in meteorology studying historical weather events.
As early as 1946 the idea of cameras in orbit to observe the weather was being developed, motivated by the sparse coverage of existing observations and the expense of using cloud cameras on rockets. By 1958, the early prototypes for TIROS (Television and Infrared Observation Satellite) and Vanguard (developed by the (US) Army Signal Corps) had been created. The first weather satellite, Vanguard 2, was launched on 17 February 1959. It was designed to measure cloud cover during its first 19 days in orbit, but a poor axis of rotation and its elliptical orbit kept it from collecting a notable amount of useful data. It was also planned to determine the drag on the satellite by studying the change in its orbit, thus providing information on the density of the upper atmosphere over the lifetime of the spacecraft, which was expected to be about 300 years.
The first weather satellite to be considered a success was TIROS-1, launched by NASA on 1 April 1960, see Figure 8. TIROS-1 operated for 78 days and proved to be much more successful than Vanguard 2. TIROS-1 was the first of a long series of polar-orbiting spacecraft which operated under various names of TIROS, ESSA (Environmental Science Services Administration), ITOS (Improved TIROS Operational Satellite) and NOAA (US) (National Oceanic and Atmospheric Administration) up to NOAA-5 which was launched in 1976. For the first 10 years or so these spacecraft carried vidicon cameras, i.e. television cameras, but around 1970 vidicon cameras were being replaced by scanning radiometers of increasing complexity as time went on. There is a parallel US military programme, the DMSP (Defense Meteorological Satellite Program) with a set of similar spacecraft and instruments to the NOAA civilian programme.
At any time there were two operational NOAA polar-orbiting spacecraft, in orbits indicated by the dashed lines in Figure 9. These orbits are almost fixed relative to the instantaneous location of the centre of the Earth, and the Earth rotates beneath them so that from any position on the Earth one obtains images every 6 hours. The satellite is moving continuously in one of these orbits and the centres of the circles indicate some of the successive positions of the spacecraft. One of these circles indicates approximately the area on the ground which can be 'seen' from the spacecraft travelling, for example, from north to south at this time, and is the area for which a ground station would be within range for receiving data transmitted from the satellite. The two solid curves on either side of the orbit indicate the swath that is covered by the satellite in that orbit. By the time the satellite has completed one orbit, that is about 100 minutes later, the Earth will have rotated by approximately 25°. Many of the points on the ground that were beneath one of the swaths shown in Figure 9(a) will no longer be beneath the satellite's swath on the next orbit. Points on the ground are also within range approximately 12 hours later when the satellite is in the same orbit but travelling from south to north. Having two satellites doubles the number of images collected per day. There is also a variation in the amount of overlap of adjacent orbits depending on latitude, with the overlap becoming very great near the poles.
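The figures of roughly 100 minutes per orbit and a 25° shift in longitude between successive orbits can be checked with Kepler's third law for a circular orbit. The ~850 km altitude used below is a nominal NOAA polar-orbit height assumed for illustration; it is not stated in the text.

```python
# Sketch of why successive polar orbits shift ~25 degrees in longitude.
# Circular-orbit period from Kepler's third law, T = 2*pi*sqrt(r^3 / GM),
# then the Earth's rotation during one period. The ~850 km altitude is an
# assumed nominal value for the NOAA polar orbiters.
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3 s^-2
R_EARTH = 6.371e6        # mean Earth radius, m
SIDEREAL_DAY = 86164.1   # s

def orbital_period_s(altitude_m):
    r = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(r**3 / GM)

T = orbital_period_s(850e3)            # about 102 minutes
shift_deg = 360.0 * T / SIDEREAL_DAY   # Earth rotation during one orbit
print(round(T / 60), round(shift_deg, 1))
```

The result, roughly 102 minutes and 25.5°, matches the approximate figures quoted above.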
The NOAA polar-orbiting programme has two spacecraft in polar orbits 90 degrees apart and, while this is clearly of some use in weather forecasting, it is rather limited. To follow the rapid development of fast changing weather systems the geostationary weather satellites were developed. As is well known there is a certain height, about 35,786 km above the surface of the Earth (taken as mean sea level), at which a spacecraft orbiting in an equatorial orbit has a period of one day; if it is going round the same way as the Earth rotates then it remains fixed above one point on the equator. It is then described as being in a geosynchronous orbit or a geostationary orbit. It is in this orbit that communication satellites are 'parked'. Instead of giving one or two images every 6 hours a geostationary satellite can generate images at any time and at any chosen interval. The chosen interval was initially 30 minutes but it has now been widely reduced to 15 minutes. The GOES series, beginning with GOES-1 which was launched by the USA in 1975, is ongoing to this day and a number of other geostationary weather satellites have been launched by various different countries since then. Since the satellite is, necessarily, stationary over a particular point on the equator the area that it can 'see' is limited and one needs a collection of, say, half a dozen or so satellites spread around the equator to obtain global coverage, see Figure 10. By having a selection of geostationary satellites distributed around the equator complete coverage of the whole of the surface of the Earth, apart from the extreme polar regions, can be obtained at regular intervals throughout the whole 24 hours of the day. Data for the missing areas in the polar regions can be obtained from the polar-orbiting weather satellites because their successive orbits are very close together over the poles, see Figure 9.
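The quoted altitude of about 35,786 km follows directly from Kepler's third law when the orbital period is set equal to one sidereal day (23 h 56 min 4 s, the Earth's true rotation period). A minimal check:

```python
# Sketch: the geostationary altitude follows from Kepler's third law
# with the orbital period equal to one sidereal day.
import math

GM = 3.986004418e14       # Earth's gravitational parameter, m^3 s^-2
R_EQUATOR = 6.378137e6    # equatorial radius, m
SIDEREAL_DAY = 86164.1    # s (23 h 56 min 4 s)

# T = 2*pi*sqrt(r^3 / GM)  =>  r = (GM * (T / (2*pi))**2) ** (1/3)
r = (GM * (SIDEREAL_DAY / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0)
altitude_km = (r - R_EQUATOR) / 1000.0
print(round(altitude_km))   # ~35786
```

The orbital radius comes out at about 42,164 km, i.e. about 35,786 km above the equator, in agreement with the figure quoted in the text.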
The USSR/Russian weather satellite programme, the Meteor series, was developed in the 1960s, not surprisingly in parallel with the US programme, although information about the Meteor programme was difficult to obtain outside the USSR. Unsurprisingly, the data have not been used seriously outside the USSR (or Russia).
The last NOAA satellite before 1978, which was NOAA-5, carried a two-channel scanner, the VHRR (Very High Resolution Radiometer) with one visible channel and one thermal infrared channel. From photographic products from the thermal channel it became obvious that structure in the temperature of the surface of the sea could be seen very clearly. Figure 11 shows an early unrectified image of the Gulf Stream obtained from the AVHRR. The general behaviour of the Gulf Stream had been known previously from in situ data gathered by ships but the details of the temporal variations (eddies) only became available from the infrared satellite data. Another feature that was known in general terms was the existence of fronts between deeper thermally stratified (warmer) water and more shallow tidally mixed (cooler) water where the temperature is independent of depth. These fronts (sharp boundaries between warmer and colder water) appear in certain places in the summer, disappear in the winter and reappear in the same places the following summer, see Figure 12. Their existence had been known before satellite data became available but the thermal infrared data enabled them to be studied in detail and an explanation to be given for why they always appeared in particular locations (Simpson 1981).

Landsat
On 23 July 1972 the Earth Resources Technology Satellite was launched by NASA (the (US) National Aeronautics and Space Administration). This satellite was quickly relabelled Landsat-1. Landsat-2 was launched in 1975 and over the years there was a succession of subsequent Landsats launched until the most recent satellite in the programme, Landsat 8, was launched on 11 February 2013. The Landsat programme is the longest-running enterprise for the acquisition of satellite imagery of the land surface of the Earth. To date the instruments on the Landsat satellites have acquired millions of images. For many years the cost of Landsat data was too high for some potential programmes. But recently it has been decided to release the historic archive of Landsat data for free and this has made possible work such as that of Gong et al. (2013) on a global 30 m resolution land use/land cover database (see Section 3.4). Between 1972 and 1978 Landsat data was mostly used in land-based applications. It was largely handled by visual interpretation of photographic products but was beginning to be handled digitally, although without the benefit of the digital image display devices that we nowadays take for granted.

Early satellite technology
The common orbits for Earth observation satellites are either polar orbits (Sun-synchronous orbits), as in the case of the NOAA/TIROS and Landsat series of satellites, or geosynchronous/geostationary orbits, as in the case of GOES etc. Other orbits have also been used. While Landsat-1 carried a vidicon camera it also carried a scanner, the multispectral scanner (MSS), while several later satellites in the programme did not carry a vidicon camera but carried an MSS or an improved scanner. TIROS-1 to TIROS-10 (1960–1966) carried vidicon cameras but over the next decade or so vidicons gave way to scanners. The principle of a scanner is illustrated in Figure 13. An image is reconstructed in a number of spectral channels or bands, where each band contains data from a particular wavelength range. The separation into spectral channels is achieved by splitting the incident light into its various component wavelengths using a number of filters or using a diffraction grating. Mechanical scanners have now been replaced by push broom scanners, which are essentially digital cameras that obtain images at intervals as the satellite travels along. Having generated the output from an instrument, vidicon or scanner, there was the question of how to transmit the data to Earth. An important feature of the NOAA polar-orbiting spacecraft was their direct broadcast facility (Schwalb 1982). As the spacecraft collected its data in orbit the unencoded data was transmitted immediately at two frequencies, VHF and UHF, and this transmission could be received by anyone with the appropriate receiving equipment located within the current ground footprint, see Figure 9. The signal, essentially a voltage, i.e. an analogue signal, could be transmitted directly to the ground or recorded on board for subsequent transmission to the ground.
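For a scanner of the kind sketched in Figure 13, the ground area sampled at nadir is set by the instantaneous field of view (IFOV) of the optics and the orbital altitude. The AVHRR values below (a 1.3 mrad IFOV and an altitude of about 850 km) are nominal published figures assumed here for illustration, not taken from the text.

```python
# Sketch: the ground footprint of one scanner sample at nadir is roughly
# the IFOV (in radians) times the altitude, by small-angle geometry.
# The AVHRR figures used (1.3 mrad IFOV, ~850 km altitude) are nominal
# values assumed for illustration.

def nadir_pixel_m(ifov_rad, altitude_m):
    """Approximate ground sample size directly below the satellite."""
    return ifov_rad * altitude_m

pixel = nadir_pixel_m(1.3e-3, 850e3)   # ~1.1 km, the familiar AVHRR pixel
print(round(pixel))
```

Away from nadir the footprint grows considerably because of the slant range and the Earth's curvature, which is one reason wide-swath AVHRR imagery degrades towards the edges of the swath.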
Alternatively the output could be digitized and transmitted directly as a digital transmission or stored on board for subsequent transmission to the ground. In the case of TIROS-N and the later NOAA polar-orbiting satellites there were two radio transmissions, one at VHF (frequency 137.50/137.62 MHz) and one at UHF (frequency 1698.0/1702.5/1707.0 MHz) (Schwalb 1982). For the VHF transmission the only receiving antenna needed was a length of wire. For the UHF transmission one needed a parabolic dish reflector mounted so as to track the satellite as it rises above one horizon until it disappears below the opposite horizon. Because it was regarded as a meteorological system the data were freely available to anyone, anywhere in the world, who had the necessary equipment to receive it. The data were not encrypted, and the format specifications were freely available. The orbital parameters were also available so that anyone with a tracking antenna for the UHF transmission could determine where and when to point their antenna and track the satellite. This was not a totally trivial engineering operation but the former Electrical Engineering Department of Dundee University had mastered this by 1976 and has been recording and archiving the data ever since. Since the direct broadcast facility enabled anyone anywhere to access the NOAA data, this partly explains why the AVHRR data have become so widely used in many different applications (see Section 3.2). Off-the-shelf receiving stations can now be purchased from any one of several suppliers.
In the early days the number of receiving stations available to the operator of a polar-orbiting satellite system, such as NOAA or Landsat, was very limited, which meant that data could only be downloaded when a satellite was within range of one of these ground stations. The purpose of on-board recording of the data was to allow the spacecraft operator (NOAA, or USGS/NASA, the operator of Landsat) to recover data from areas out of sight of the limited number of receiving stations possessed by the spacecraft operator. The recorded data from an inaccessible area could then be downloaded when the spacecraft was within range of a receiving station. The on-board recording capacity was limited and so the recording of data was programmed by ground control. In the case of the NOAA polar-orbiting system there was not sufficient recorder storage to accommodate all five spectral bands of the AVHRR data from a complete orbit. The full spatial resolution data in all five bands was transmitted in real time at UHF and so could be received within the instantaneous circle of view, but only some of that data could be recovered by recording, according to scheduling by mission control. A spatially and spectrally degraded version of the data from the whole orbit was recorded and downloaded when the satellite passed over a NOAA ground station. In the case of Landsat, after the early days of the system a number of local receiving stations were established by local agencies and licensed by USGS/NASA to receive the data for a fee, which the receiving station hoped to recover from sales of the data to customers as image products or as digital data. By comparison with the case of Landsat and NOAA, the data from the Meteor programme were not generally available and consequently the data have not been used as widely as the AVHRR or Landsat data.

Atmospheric sounders, TOVS
In addition to carrying scanning radiometers, the NOAA series of polar-orbiting spacecraft have carried atmospheric sounders; the TOVS (TIROS Operational Vertical Sounder) was the version that was operational in 1978. A sounder is an instrument designed to obtain vertical profiles of pressure, temperature and, later, humidity and, more recently, concentrations of trace gases in the atmosphere as functions of height. The intention is to obtain from a satellite the information obtained from radiosondes, i.e. weather balloons, which have traditionally been launched at fixed times (0000, 0600, 1200, 1800 GMT (Z)) from various stations around the world. While the spatial distribution of radiosonde stations is very non-uniform over the surface of the Earth, the sampling by satellite sounder is much more uniform. On the other hand, while the radiosonde measures directly the desired atmospheric parameters, the satellite sounder makes indirect measurements and a complicated inversion process is needed in order to determine the required atmospheric parameters; the inversion process needs to be validated.
The choice of 1978 as a watershed was based on the new satellites and instruments that were flown in that year. They were:
(a) TIROS-N, which carried principally:
• The AVHRR (Advanced Very High Resolution Radiometer).
(b) SEASAT, which carried the following instruments:
• A radar altimeter to measure spacecraft height above the ocean surface
• A scatterometer to measure the near-surface wind speed and direction
• A Scanning Multichannel Microwave Radiometer (SMMR) to measure sea surface temperature at very low spatial resolution
• A visible and infrared (presumably scanning) radiometer to identify cloud, land and water features
• A Synthetic Aperture Radar (SAR)
(c) NIMBUS-7, which carried the following instruments:
• The CZCS (Coastal Zone Colour Scanner)
• Another SMMR.
We shall consider the importance of these systems in part 3.
1978 was an interesting year in terms of satellite data sources for different applications. There was an established set of sources of data for meteorological applications from polar-orbiting and geosynchronous satellites. There was also an established source of data for land-based applications from the Landsat programme (Landsat 3 was launched in 1978) but there were at that time no other rival sources of land use/land cover data from remote sensing satellites. The first rival to appear on the scene was the French SPOT (Satellite Pour l'Observation de la Terre (Satellite for the Observation of the Earth)). SPOT-1 was launched on 22 February 1986 and transmitted multispectral data at 20 m resolution and panchromatic data at 10 m resolution. The SPOT system continued with SPOT-2 (launched on 22 January 1990) and on to SPOT-7 (launched on 30 June 2014). Following SPOT-1 many other countries launched their own land observation satellite systems at various improved spatial and spectral resolutions, with various levels of operationality and various levels of ease of access to the data by potential users. Herbert Kramer struggled manfully to keep an updated list of such systems (see Kramer 2014). Polar-orbiting satellites reigned supreme as the source of remote sensing data for the land for a long time, but recently they have been challenged by Google Earth, which of course uses satellite data but also incorporates data from other sources, and by drones, see Section 3.7.

Data handling and interpretation in 1978
We have already discussed in Section 1.5 the question of data recovery from the instruments on spacecraft. We now consider the handling of the data at a receiving station and its processing for the delivery of products to users. By 1978 satellite scanner data had been digital for some years and it was delivered to the customer as a photographic product from the receiving station or as digital data on magnetic tapes which could be read on a mainframe computer. We now consider how the data were handled at the receiving station when digital image processing systems were almost non-existent. My former colleagues in Electrical Engineering at Dundee University installed what was possibly the first local civilian receiving station in the world for the data from the NOAA polar-orbiting satellites in about 1976 and began seriously archiving the data from TIROS-N and other satellites in 1978. In those days photographic images were generated in Dundee on old photofacsimile machines salvaged from newspaper offices. The images were not geometrically rectified to geographical coordinates; instead, transparent sheets with (curved) grids of latitude and longitude appropriate to the particular orbit were placed on top of the photographic paper or negative film while it was being exposed, see Figure 14. Elsewhere film writers were developed to take digital data as input; the signal was used to modify the intensity of a spot of light that scanned photographic film or paper to produce a black and white, or greyscale, image. It was, and still is, popular to produce false colour images where data from three different spectral bands are applied to the three colour emulsions of a photograph or to the three colours of a video monitor.
One particular false colour composite proved to be very popular with early Landsat data, in which the near-infrared Landsat MSS (Multispectral Scanner) band is displayed as red, the red band as green and the green band as blue. The origin of this colour scheme seems to have been a wish to replicate the use of false colour infrared film where healthy vegetation appears red. In 1978 most of the work using satellite data from Landsat or meteorological satellites was done by photointerpretation.
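The construction of such a composite amounts to routing each spectral band to a display colour. A minimal sketch, assuming three small arrays of digital numbers standing in for the MSS bands (the values and the NIR-to-red assignment follow the vegetation-appears-red convention described above):

```python
import numpy as np

# Hypothetical 2x2 digital-number arrays standing in for Landsat MSS bands
green = np.array([[10, 20], [30, 40]], dtype=np.uint8)   # green band
red   = np.array([[15, 25], [35, 45]], dtype=np.uint8)   # red band
nir   = np.array([[90, 80], [70, 60]], dtype=np.uint8)   # near-infrared band

# Classic false colour assignment: NIR -> red gun, red -> green gun,
# green -> blue gun, so strongly NIR-reflective vegetation appears red
false_colour = np.dstack([nir, red, green])
print(false_colour.shape)  # -> (2, 2, 3)
```

The same three-line recipe works for any scanner with suitable bands; only the band-to-gun assignment changes.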
Digital data were distributed on 2400 ft (approx. 730 m), half-inch (12.7 mm) width, 1600 bpi magnetic tapes, on reels of diameter 10½ inches (approx. 26.7 cm), see Figure 15. These tapes will be discussed further in Section 3.3.

Digital image processing. Example of the Tay Estuary
Several examples of digital processing of Landsat data between 1972 and 1978 presented at the Symposia of LARS (Laboratory for Applications of Remote Sensing) at Purdue University will be found on their website https://docs.lib.purdue.edu/lars_symp/ (accessed 9 November 2018). The pioneer textbook on the digital processing of remote sensing data by Swain and Davis was published in our landmark year of 1978 (Swain and Davis 1978). We consider an example which is very much a personal recollection from 1978. The object is to illustrate what life was like then by considering one simple project, carried out in about 1978, which these days would be quite trivial with modern computers and image processing software. It is described by Cracknell et al. (1982). It was a simple photointerpretation and mapping project which might serve as a student exercise to be done in one or two afternoons in a modern remote sensing undergraduate course, and which if submitted for publication now would almost certainly be rejected as 'just a student exercise'. It related to the area around Dundee, Scotland; a false colour composite of the study area is shown in Figure 16 and a sketch map of the study area is shown in Figure 17. The study area is approximately 60 km × 60 km. The work concerned the estuary of the River Tay and mapping the sandbanks which are submerged at high tide but which become exposed at low tide. We obtained ten scenes from our Landsat distribution station and these scenes were delivered on ten 2400 ft computer compatible magnetic tapes. The question then arose of how to display an image. There was, at that time, no civilian image display or image processing system available in the UK. So we had to buy time on the purpose-built DIBIAS system at the DLR (formerly the DFVLR) in Oberpfaffenhofen, near Munich, Germany. It cost £100 an hour (in 1978 money, or about £400 in today's money, i.e. about US$500 an hour!), plus of course the air fare and the hotel costs.
So I had a cardboard box with these 10 tapes and I flew from Dundee to Munich and spent 2 or 3 days there. We produced a few 70 mm colour transparencies which provided the images for some of the figures in the IJRS paper which I have quoted and which are reproduced here. We selected two areas: the first is the sandbanks at the mouth of the River Tay, see Figure 18, and the second is the upper estuary, see Figure 19.
Then we set about plotting the sandbanks shown in these two slides on a proper map projection. In other words we had to do a geometrical rectification of the raw image, which these days is a totally trivial operation: press the correct button on an image processing system. However, such software (and hardware) simply did not exist in 1978. The images I have shown you are unrectified. We had to choose a transformation of the form

E = a0 + a1 S + a2 P + (higher-order terms in S and P)   (2)

N = b0 + b1 S + b2 P + (higher-order terms in S and P)   (3)
where E and N are the easting and northing and S and P are the scan line number and pixel number, respectively, of a point in the image. We only retained the linear terms because we were dealing with a very small area. We had to find our ground control points (gcps) in the raw data and in the map, see Table 1. We identified 30 gcps on the 1:50,000 Ordnance Survey map visually, and that was easy enough, but then there was the question of how to identify the scan line and pixel numbers of these gcps. All we had was a mainframe computer onto which we loaded our magnetic tapes. So we printed out the digital numbers, line by line, on line printer paper. We identified the gcps from the digital values in band 4 (the near-infrared band, 0.8-1.1 μm). These gcps were spread out over about 274 scan lines and over 500 pixels. This line printer paper was spread out over the floor of the lab. To cover the whole area would have needed around 1700 sheets of line printer paper, so we limited ourselves to areas where we knew (roughly) there would be some gcps. It was extremely tedious but provided good master student project material. The rectification transformation was applied to each gcp and a least squares fit was used to determine the best values of the coefficients in the transformation equations. Indeed we were so naïve, or lacking in photogrammetric experience, that we initially assumed that we could just use the exact number of gcps (6) that would be enough to give an exact fit to equations (2) and (3). We learned the hard way that, because of errors in the data, it was necessary to determine the best least squares fit. Then the contours identifying the boundaries between the dried-out sand and the water were plotted on these same sheets and the transformation equations were applied to project these contours onto the map projection. The results are shown in Figures 20 and 21.
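The least squares fit of the linear transformation can be sketched in a few lines. The gcp coordinates and transform coefficients below are invented purely for illustration; in practice S and P came from the line printer output and E and N were read off the 1:50,000 map:

```python
import numpy as np

# Six hypothetical ground control points: scan line S and pixel P in the raw image
S = np.array([100., 150., 220., 300., 340., 370.])
P = np.array([120., 400., 250., 480., 130., 310.])

# For illustration, generate map coordinates from a known linear transform:
#   E = a0 + a1*S + a2*P,   N = b0 + b1*S + b2*P
E = 330000. + 40. * S + 3. * P
N = 740000. - 35. * S + 2. * P

# Least squares estimate of the six coefficients; with more gcps than
# unknowns this averages out measurement errors, which an exact fit to
# exactly six points cannot do
A = np.column_stack([np.ones_like(S), S, P])
coeff_E, *_ = np.linalg.lstsq(A, E, rcond=None)
coeff_N, *_ = np.linalg.lstsq(A, N, rcond=None)
print(np.round(coeff_E, 3))  # -> [330000.     40.      3.]
```

With real, noisy gcps the recovered coefficients would not reproduce the map coordinates exactly, and the residuals of the fit give a useful check on gcp quality.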
This example has been included in considerable detail because the basic principles of operations which are now so commonly done automatically on a modern image processing system can so easily be overlooked now. As we have already seen, 1978 was chosen as the break point. What has happened since then is so great that we cannot possibly cover it in the space available; all we can do is to pinpoint some of the major developments that have occurred since then. The Achilles heel of much that was going on in 1978 was the limited spatial, spectral and temporal resolution of remote sensing data. Many potential applications of remote sensing were prevented by the limitations imposed by one or more of these parameters and by restrictions on data availability, data storage and data handling.
The topics to be covered in this part of the editorial include:
• The AVHRR
• Data storage, computing power and software developments
• Global studies and modelling
• Small satellites
• Satellite oceanography
• UAVs (drones)
These are wide ranging, but not exhaustive.

The AVHRR
I will indulge myself for a little in my own particular hobby horse, the AVHRR. This is not, I think, mere self-indulgence: the AVHRR illustrates the importance of 'operationality' and has acted as a proof of concept for so many later instruments and missions. I was reminded recently of a paper that I wrote over 15 years ago called 'The exciting and totally unanticipated success of the AVHRR in applications for which it was never intended' (Cracknell 2001). The AVHRR was conceived in the mid 1970s as a scanning radiometer to be flown on the NOAA polar-orbiting satellites for meteorological purposes, including the study of sea surface temperatures; the first in the series was launched in 1978. But since then, for a whole variety of reasons, it came to be one of the most valuable sources of data for non-meteorological purposes in a whole variety of environmental scientific and management contexts.
Whereas its predecessors, the various VHRRs, had just two spectral bands or channels, one visible and one thermal infrared, when it came to the design of the AVHRR it was  decided that the new instrument would have five channels, of which two would be thermal infrared channels with slightly different wavelengths. The reason for this was so that the data from these two channels could be used to make atmospheric corrections to the satellite-derived sea surface temperatures. The wavelengths of the five bands (channels) of the AVHRR are shown in Table 2.
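The use of the two thermal channels is the classic split window correction: the difference between the two brightness temperatures grows with atmospheric water vapour, so adding a multiple of that difference compensates for atmospheric absorption. A minimal sketch follows; the coefficients a, b and c are purely illustrative, not an operational set (real coefficients are derived by regression against in-situ measurements):

```python
# Sketch of a split window sea surface temperature correction using the two
# AVHRR thermal channels (brightness temperatures near 10.8 um and 11.9 um).
# Coefficients are hypothetical, for illustration only.
def split_window_sst(t11, t12, a=1.0, b=2.5, c=0.5):
    """Estimate SST (deg C) from brightness temperatures t11, t12 (deg C).

    The channel difference t11 - t12 increases with atmospheric water
    vapour, so a multiple of it corrects for atmospheric absorption.
    """
    return a * t11 + b * (t11 - t12) + c

sst = split_window_sst(18.0, 17.2)  # -> 18.0 + 2.5*0.8 + 0.5 = 20.5
```

The same two-channel principle underlies the operational SST products derived from AVHRR data to this day.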
Over 20 years ago a book of mine was published with the accurate, but not very exciting, title 'The Advanced Very High Resolution Radiometer' (Cracknell 1997). I would have liked to have used the title of the paper I have just mentioned as the title for the book but the publishers persuaded me that it would not be a very good title for a serious book. It was, however, a very accurate expression of how I saw the AVHRR at that time.
The name. Let's just reflect on that for a minute; it probably never was very appropriate. By today's standards it is not at all advanced and probably never was; it was based on technology that was already well tried and tested in the 1970s. Unlike SEASAT, which was also launched in 1978, it was not at all adventurous. Even by the standards of 1978 it really was not very high resolution at all: its spatial resolution was about 1 km, whereas the Landsat MSS, which had been in space since 1972, had a spatial resolution of 80 m. It was probably not even a very good idea to call it a radiometer; it would better have been described as a scanning radiometer or a multispectral scanner.
I don't want to make an issue of that. The instrument was itself a great success. I say 'the instrument' but, unlike many systems put into space, it was not just one instrument but a whole series of nominally identical instruments. Although a few improvements were made in the later instruments in the series they always maintained compatibility with the earlier instruments in the series.
So, first of all:
• The AVHRR was an operational system. There were always two spacecraft in orbit, in two mutually perpendicular planes, as we have seen in Figure 9. When one of them failed there was always another version of the instrument on the ground waiting to be launched as a replacement.
• There is an archive of data going back over 40 years, though with other instruments, Metop and VIIRS, providing data for the most recent years. Thus people have known that it is worthwhile to develop hardware and software to handle the data for particular applications because they could be assured of continuity of data supply.
• Furthermore, the calibration coefficients of the various instruments in the series are accurately known, as are the changes in these coefficients with the age of each instrument. Thus any instrumental effects can be eliminated, so that there is an unrivalled 40-year database for the study of various long-term environmental trends.
• The data were (generally) free of serious noise problems. Channel 3, the middle infrared channel (3.5-3.9 μm), was originally unique in that during the day the signal is composed of both reflected and emitted thermal infrared radiation. At night, of course, there is only the emitted radiation. Channel 3 and the thermal infrared channels were calibrated in-flight. The visible and near-infrared channels were calibrated pre-launch and, because of the importance of the data, there were extensive post-launch calibration programmes for these channels.
• The data were freely available. The direct broadcast facility (see Section 1.5) meant that the data could be captured by anyone with suitable and not enormously expensive equipment. The data were not encrypted and there was no licence fee to be paid to the operator (NOAA), on the principle that countries freely exchange meteorological data on a routine basis. Or the data could be obtained from NOAA.
The AVHRR was originally designed for meteorological purposes, i.e. to provide additional information to forecasters, supplementing the data from ground stations and radiosondes, in the public presentation of weather forecasts, for teaching students of meteorology, and in meteorological research. But in the end it proved to be applicable in very many other fields, see Table 3. One of the earlier and more spectacular successes, apart from sea surface temperatures, was in providing normalized difference vegetation index (NDVI) data for the whole global land surface, see for example Justice et al. (1985).
Figure 22. Moore's law. The best fit straight line was drawn by eye by the present author.
The successor to the NOAA polar-orbiting spacecraft system was a joint system in which one of the spacecraft orbits was taken over by EUMETSAT with its Metop system (first launched in October 2006) and the other was retained by the US for its VIIRS (Visible Infrared Imaging Radiometer Suite) (Hutchinson and Cracknell 2006), which was finally launched in October 2011. Metop included a late version of the AVHRR, the AVHRR/3, while VIIRS is a 22-band scanner where some of the bands are (more or less) the same as the bands on the AVHRR. The AVHRR dataset starting from 1978 thus continues until today and is an enormously valuable resource for studies of global or regional trends.

Data storage, computing power and software systems
These are headings which describe things that have expanded out of all recognition over the last 40 years, but it is very difficult to quantify this expansion in each of these areas. A convenient way to get some appreciation of this expansion is to consider a graph which describes Moore's Law, see Figure 22.
This is a graph which shows the number of transistors which it has been possible to fit on to an integrated circuit chip, as a function of time. One should not be misled by the fact that this looks like a simple straight line, because it is a log-linear plot. Let us consider what this means. Suppose that we take the first cycle, from 1,000 up to 10,000, on the y axis, which on the page occupies, say, 1 cm. The top of the y axis corresponds to 10^10, so that if we had used a linear scale on the y axis the top of the axis would be 10^6 × 1 cm away from the origin, or about 10 km away from the origin; the y axis of this graph would be 10 km long. This graph relates just to the number of transistors on an IC chip, so it does not translate directly into any of the three things, data storage, computing power or software, but it does give some sort of feel for what is involved.
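The 10 km figure can be checked with a couple of lines of arithmetic, taking 1 cm to represent roughly one decade's worth of transistors (about 10^4) as in the argument above:

```python
# Thought experiment: if one logarithmic decade (1,000 to 10,000 transistors)
# occupies 1 cm, how long would a linear y axis reaching 10**10 be?
transistors_per_cm = 1e4          # roughly one decade's span per centimetre
top_of_axis = 1e10                # transistor count at the top of the plot
length_cm = top_of_axis / transistors_per_cm   # 1,000,000 cm
length_km = length_cm / (100 * 1000)           # cm -> m -> km
print(length_km)  # -> 10.0
```
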
Data storage has been revolutionised. Consider the 2400 ft magnetic tape of the 1970s again. These were nine-track tapes, so that at any position along the tape there are 9 bits: one byte, i.e. 8 bits of data (0-255), plus one parity bit (set to 0 or 1 according to whether the number of 1-bits in the byte is even or odd); the parity bit is there as a check on the integrity of the data. The tapes held 1600 bytes per inch, so 2400 ft contain 2400 × 12 × 1600 bytes = 46.08 Mbyte. In terms of modern-day storage available to the general public this means that one 64 Gbyte memory stick (flash disk) is equivalent to (64 × 1000)/46.08, or approximately 1390, of those tapes.
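The tape arithmetic above is easily reproduced:

```python
# Capacity of one 2400 ft, 1600 bpi nine-track tape (8 data bits plus
# 1 parity bit per frame, so 1600 bytes of data per inch of tape)
tape_bytes = 2400 * 12 * 1600          # feet * inches/foot * bytes/inch
tape_mbyte = tape_bytes / 1e6          # = 46.08 Mbyte

# How many such tapes does one 64 Gbyte memory stick hold?
stick_mbyte = 64 * 1000
tapes_per_stick = stick_mbyte / tape_mbyte
print(round(tapes_per_stick))  # -> 1389, i.e. roughly 1390 tapes
```
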
The processed data from the 7½ years of operation of the CZCS were archived at the Goddard Space Flight Center and originally stored on 38,000 nine-track magnetic tapes and, of course, later migrated to a more compact storage medium. I remember one time going to NOAA's data centre and seeing their archive of early weather satellite data, maybe 10-15 years' worth on 2400 ft magnetic tapes, and it occupied a building the size of, say, a couple of tennis courts. This presents not just a storage problem but an indexing/cataloguing problem too. And, of course, the tapes would sometimes deteriorate and become unreadable. Eventually such archives were transferred to other media. Suppose we took these 38,000 tapes and stored them on racks, say 2 m high and 2 m wide, with 6 shelves and 50 tapes on a shelf, and with a 60 cm access way between pairs of racks. If we then added all the Landsat, NOAA and GOES (geostationary weather satellite) tapes we could more or less fill an (empty) Olympic-size swimming pool. On modern media it would all go on to one or two large external storage disks attached to a laptop. There is no guarantee that Moore's Law can be projected indefinitely into the future.
This discussion was intended to convey something of the vast increases in data storage that have occurred in the last 40 years. One might attempt to do the same sort of thing for computing speed/power and for software. Of course, Moore's Law is relevant not just to remote sensing but to everyday life. And, of course, then there is the internet too.

Global studies, modelling
The role of remote sensing in providing input data to weather forecast, ocean current and climate models has developed extensively over the last 40 years. There was some weather forecast modelling in hand in 1978 and out of weather forecast modelling there grew climate modelling. As the models became more sophisticated, so more types of remotely sensed data were fed into the models. Or perhaps one should put this the other way round: the more different types of remotely sensed data became available, the more ways were found to ingest the data into these models. We can illustrate this by considering the question of global land cover/land use maps. This is usefully summarized in the introduction to a rather important paper on finer resolution global land cover mapping by Gong et al. (2013) (with about 50 co-authors). The Gong et al. (2013) paper cites six stages in the development of global land cover maps at increasingly fine spatial resolution before their own work on the global Landsat database. The first two started at 1 km spatial resolution and were based on NDVI (Normalised Difference Vegetation Index) data derived from the AVHRR, see Section 3.2. Later ones involved MODIS data at 500 m resolution and MERIS data at 300 m resolution:
• 1 km IGBP (International Geosphere-Biosphere Programme) AVHRR NDVI data
• 1 km University of Maryland AVHRR NDVI data
• 1 km SPOT NDVI data
• 500 m MODIS (Moderate Resolution Imaging Spectroradiometer) data
• 300 m GlobCover MERIS (Medium Resolution Imaging Spectrometer) data
• 1 km MODIS land-cover map
What Gong et al. did was to make use of the release of the Landsat archive to derive a 30 m resolution Finer Resolution Observation and Monitoring of Global Land Cover (FROM-GLC) database. This involved using 8929 TM or ETM+ scenes from 1984 to 2011 to produce a global land cover map, see Figure 23. The data behind these four products are at 30 m resolution.
While differences between the products of the four classification methods used can be detected, it is worth noting that these differences are small, which is consistent with the general view of Li et al. (2014) that it does not matter very much which classification method is used so long as one has good training data.
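The NDVI on which the earliest global products were based is a simple per-pixel ratio: NDVI = (NIR - Red)/(NIR + Red), ranging from -1 to +1, with dense healthy vegetation giving high positive values. A minimal sketch with hypothetical reflectance values:

```python
import numpy as np

# Hypothetical red and near-infrared reflectances for three pixels:
# dense vegetation, sparse/bare ground, and a water-like surface
red = np.array([0.08, 0.20, 0.05])
nir = np.array([0.45, 0.22, 0.04])

# NDVI = (NIR - Red) / (NIR + Red); vegetation reflects strongly in the
# NIR and absorbs in the red, so it yields high positive values
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 3))  # high, near-zero, slightly negative
```

For the AVHRR the red and NIR reflectances would come from channels 1 and 2 respectively.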

Small satellites
Small satellites provide an important source of remote sensing data. I would like to refer to a comprehensive article on this subject by Kramer and Cracknell (2008). That article is mostly the work of Herbert Kramer, with whom I had the honour to collaborate in writing it; he did all the work behind it. We wrote this article to celebrate the 50th anniversary of Sputnik, which was launched in 1957. It is difficult to know how to condense an article of over 50 pages into a few paragraphs without being totally trivial. Sputnik was small and its successors were small but, as time went on, the satellites that were flown became bigger and bigger. They became more expensive and took a long time to design, build and launch, and development was in the hands of a few big and well-funded organisations, see Figure 24. Moreover, in the early days these activities were confined to just a few players, NASA in the US, the (former) Soviet Union and the European Space Agency, with a few big countries, India, China, Brazil, trailing along behind.
However, there were problems with these big satellites, each of which carried several different instruments. Compromises often had to be made between the requirements of different instruments on a large satellite. Moreover, if the satellite failed then many instruments, and their projects, costing large amounts of money and human effort, were all lost. So we are now seeing more small satellites, each of which is dedicated to a particular mission objective and carries a single instrument. There seems to be a general consensus to classify satellites by their mass:
• Picosatellites (0.1-1 kg)
• Nanosatellites (1-10 kg)
• Microsatellites (10-100 kg)
• Minisatellites (100-1000 kg)
• Large satellites (>1000 kg)
Already in the early 1960s, the first spacecraft of a family of tiny communication satellites, referred to as OSCAR (Orbiting Satellite Carrying Amateur Radio), was designed and developed by a California-based group of amateur radio enthusiasts. OSCAR-1 was the first battery-powered amateur satellite. It had a mass of 4.5 kg and was launched on 12 December 1961 (piggyback on the Discoverer 36 spacecraft of the USAF (US Air Force)). OSCAR-1 orbited the Earth for 22 days, and over 570 amateur radio operators in 28 countries reported receiving its simple 'HI-HI' Morse code signals.
In 1969 the Radio Amateur Satellite Corporation (AMSAT) was founded in Washington DC as an educational organization to give amateur radio satellites an international base. Some advancements of the OSCAR family were achieved but, like many new developments, the small satellites of the early space age were simply overlooked by the established space industry and the space agencies, as well as by the media, who in the 1960s were more concerned with the Cold War and the race to the Moon. The international amateur radio satellite community and associated universities must be regarded as the true pioneers of small satellite technology. Facing very real constraints on financial support and technical resources, they evolved a highly pragmatic and cost-effective philosophy of small-scale space engineering as the only practicable means to gain access to space. Figure 25 shows an example from the OSCAR programme and gives an idea of the size of the spacecraft involved.
A key player in the field of small satellites has been SSTL (Surrey Satellite Technology Ltd) who have built and launched a long list of small satellites, see Table 3 of Kramer and Cracknell (2008). SSTL and EADS Astrium have played a key role in technology transfer to various developing countries, see section 6.1 of that article, also Sandau, Rösser, and Valenzuela (2014).
An important development in small satellites was the CubeSat, which owes its origin to Professors Jordi Puig-Suari of California Polytechnic State University and Bob Twiggs of Stanford University, who proposed the CubeSat reference design in 1999 with the aim of enabling students to design, build, test and operate a spacecraft. The CubeSat, as initially proposed, did not set out to become a standard; rather, it became a standard over time by a process of emergence. Generally CubeSats have piggy-backed on launches of major spacecraft, the first ones being launched in June 2003. As of 28 October 2018, 878 CubeSats had been launched, as well as two interplanetary CubeSats, according to the 'Nanosatellite and CubeSat Database' http://www.nanosats.eu (accessed 13 November 2018). A CubeSat is a type of small satellite for space research that is made up of multiples of 10 cm × 10 cm × 10 cm cubic units. CubeSats have a mass of no more than 1.33 kg per unit and often use commercial off-the-shelf (COTS) components. There is a long article on CubeSats in Wikipedia.

Oceans: satellite oceanography
We have already discussed in Section 1.3 the role of the NOAA VHRR analogue thermal infrared data in studies of sea surface temperatures. In 1978 the advent of calibrated digital thermal infrared data from the AVHRR enabled the quantitative determination of sea surface temperatures on a routine basis with, very soon afterwards, the opportunity for making atmospheric corrections using the split window (two channel) data. This was really the beginning of satellite oceanography, a whole new academic discipline which was only in its infancy in 1978 but which has now grown to be a major component of oceanography. This is important because the oceans are very large and rather inaccessible, except for the limited areas studied by research cruises on oceanographic survey vessels and some data acquired from commercial or naval shipping. There is a very early book giving some idea of what is involved by Robinson (1985). The second, more comprehensive, edition of this book (Robinson 2004) runs to 669 pages and, though in many areas it could now be extended further, gives some idea of how oceanography was revolutionised by the arrival of satellites.
1978 was a key year for the extension of satellite oceanography beyond the early studies of sea surface temperatures from thermal infrared satellite data which we have described in Section 1.3, and this was because of the launch of SEASAT and the flying of the CZCS on NIMBUS-7, as we have already mentioned in Section 2.1.
SEASAT was the first Earth-orbiting satellite designed for remote sensing of the Earth's oceans using active microwave instruments. The mission was designed to demonstrate the feasibility of global satellite monitoring of oceanographic phenomena and to help determine the requirements for an operational ocean remote sensing satellite system. Specific objectives of SEASAT were to collect data on sea-surface winds, sea-surface temperatures, wave heights, internal waves, atmospheric water vapour, sea ice features and ocean topography. Many later remote sensing missions owe their existence to the successful proof of concept legacy of SEASAT. The mission only lasted 106 days (launched 27 June 1978, died 10 October 1978), but the important thing about SEASAT was that it was a pioneering system with proof of concept active instruments operating in the microwave part of the electromagnetic spectrum. The success of SEASAT, with its radar altimeter and scatterometer, marked the beginning of studies of the geoid and near-surface wind speeds. It is probably fair to say that people did not know what to expect from the synthetic aperture radar (SAR) which it carried. It was also a long time before digital processing facilities were set up to process the SAR data, and the initial processing was done using an optical processor. The SAR produced a few surprises and one conspiracy theory. The conspiracy theory is that once it was realized that SEASAT was able to detect the wakes of submerged submarines, a discovery not anticipated before launch, the military shut SEASAT down, with a cover story of a power supply short. A number of later systems owe their heritage to SEASAT.
The CZCS flown on NIMBUS-7 performed for ocean colour a similar role to that of the SAR on SEASAT. The CZCS was a six-channel scanning radiometer with a resolution of 800 m. Many of the problems surrounding early satellite remote sensing revolved around the trade-offs between temporal, spatial and spectral resolution. Landsat had the advantage of spatial resolution (80 m and later 30 m) but suffered from poor temporal resolution (once in 18 days, later once in 16 days, and less often in cloudy areas). There had been occasional Landsat images acquired of marine algal blooms, see Figure 26, which had caused great excitement among oceanographers. Landsat was very good for slowly varying situations, geology, deserts and forestry for example, but achieved only limited success with crop studies. The AVHRR generated data several times each day but at much lower spatial resolution (approximately 1 km). Marine algal blooms had been known for many years, but the satellite images were spectacular for showing the extent of a bloom at a particular time. For any serious study of the occurrence of algal blooms (their growth, development, evolution and decay, a process which occurs over a few weeks) one would require a time sequence of far more frequent images than is available from Landsat. One of the principal objectives of the CZCS was to monitor algal blooms with the benefit of the temporal frequency of the NOAA satellites but with enhanced spectral resolution, bearing in mind that the spatial resolution of Landsat was not necessary and that lower spatial resolution would be adequate for many purposes. The CZCS was predominantly designed for water remote sensing.
As was the case with the thermal infrared images which displayed the variations in the sea surface temperature in far more detail than was previously possible without satellite data (Figures 11 and 12), so also the CZCS enabled the spatial extent and changing signature of algal blooms to be observed in a level of detail that was previously impossible on oceanographic cruises. Nimbus-7 was launched on 24 October 1978, and CZCS became operational on 2 November 1978. It was only designed to operate for one year (as a proof-of-concept mission), but in fact it remained in service until 22 June 1986. Its operation on board Nimbus-7 was limited to alternate days as it shared its power with the passive microwave Scanning Multichannel Microwave Radiometer (SMMR). The development of the CZCS enabled sea surface data to be obtained with a spatial, temporal and spectral resolution that was appropriate to following the development of marine algal blooms over periods appropriate to their appearance, development, evolution and final decay, i.e. a few weeks or a month or two. It also became possible to obtain data on marine chlorophyll and suspended sediment concentration which also vary rather rapidly. The most significant product of the CZCS was its collection of so-called ocean colour imagery. The 'colour' of the ocean in CZCS images comes from substances in the water, particularly phytoplankton (microscopic, free-floating photosynthetic organisms), as well as inorganic particulates. An example of an ocean colour image of the sea around Tasmania derived from CZCS data is shown in Figure 27; this shows (colour coded) chlorophyll concentrations in the sea. Because ocean colour data is related to the presence of phytoplankton and particulates, it can be used to calculate the concentrations of material in surface waters and the level of biological activity. 
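The principle behind such chlorophyll retrievals is a band ratio: chlorophyll absorbs blue light, so the blue/green reflectance ratio falls as chlorophyll concentration rises. A heavily simplified sketch of the idea follows; the power-law coefficients are hypothetical and this is not the operational CZCS algorithm:

```python
# Sketch of the band-ratio principle behind ocean colour chlorophyll
# retrieval: the blue/green reflectance ratio decreases as chlorophyll
# concentration increases. Coefficients a and b are purely illustrative.
def chlorophyll_estimate(r_blue, r_green, a=1.0, b=-1.7):
    """Return a chlorophyll concentration estimate (mg m^-3) from the
    blue/green reflectance ratio via chl = a * (r_blue/r_green)**b."""
    return a * (r_blue / r_green) ** b

low_chl = chlorophyll_estimate(0.020, 0.010)   # high blue/green ratio
high_chl = chlorophyll_estimate(0.008, 0.010)  # low blue/green ratio
```

As noted below, the small water-leaving signal makes such retrievals very sensitive to atmospheric correction errors, which is why so much effort went into correcting CZCS data.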
Phytoplankton remove carbon dioxide from the sea water during photosynthesis, and this forms an important part of the global carbon cycle; therefore satellite-based ocean colour observations provide a global picture of life in the world's oceans, because phytoplankton are the basis for the vast majority of oceanic food chains. By recording images over a period of years, a better understanding of how the phytoplankton biomass changes over time can be obtained. One feature of ocean colour studies, as initiated by the CZCS, is that because the reflectivity of the ocean is much lower than that of the land, the satellite-received signal contains a proportionately larger component arising from the atmosphere, and so in quantitative studies of CZCS data atmospheric corrections are much more important than for land-based studies.
The SMMR was a microwave scanning radiometer (resolution 60 km × 60 km) which was flown on SEASAT and NIMBUS-7. It was the successor to several microwave radiometers flown on earlier NIMBUS satellites. The stated SMMR mission objective was to obtain geophysical parameters such as sea surface temperature, low-altitude winds, water vapour and cloud liquid water content, sea ice extent, sea ice concentration, snow cover, snow moisture, rainfall rates, and differentiation of ice types.

UAVs, drones
The first important source of remote sensing data acquisition was the use of air photos, and the first revolution after that was the addition of data from Earth-orbiting satellites and geostationary satellites. It is probably no exaggeration to say that the next revolution in the acquisition of remote sensing data has been the arrival of the UAV (Unmanned Aerial Vehicle, drone, or whatever else we choose to call them) on the scene. Of the three needs of spatial, spectral and temporal resolution, the UAV particularly addresses the questions of spatial and temporal acquisition. Once you have a UAV you can acquire data where and when you want it, and reasonably cheaply. Recall for a moment Neubronner's pigeons. They obtained photographs of the ground but there were several obvious problems, including:
• no control over the flight path;
• no control over the choice of the site photographed;
• no record of the location of the pigeon/camera when the photograph was taken;
• no GPS to identify the location of the pigeon/camera when the photograph was taken;
• no record of the orientation of the camera.
Figure 28. Numbers of airprox events in UK air space in recent years. (UK Airprox Board).
Recently a number of developments came together to solve all these problems. The flying of model aircraft as a hobby has been around for decades, but it is only very recently that a combination of developments made the drone a viable source of remote sensing data: the development of lightweight airframes as platforms, of lightweight, long-life or rechargeable batteries as power sources, the advent of GPS, the development of low-cost lightweight GPS-based controllers, and the development of lightweight cameras. People have now started to hold conferences on the use of UAVs in remote sensing, and in 2017 we produced a special triple issue of nearly 1200 pages of the International Journal of Remote Sensing on UAVs for environmental applications ('Special Issue: Unmanned aerial vehicles for environmental applications', International Journal of Remote Sensing, 38, nos 8–10, 20 April – 20 May 2017, pages 2029–3202), covering a wide range of applications. We have just produced another special issue of the International Journal of Remote Sensing on drones ('Special Issue: Unmanned Aerial Systems (UAS) for Environmental Applications', International Journal of Remote Sensing, 39, nos 15–16, August 2018, pages 4845–5595), and we have also just introduced a special section on drones in remote sensing within the regular issues of this journal. There are, of course, several types of drone: those based on the helicopter principle, fixed-wing systems, and one or two less common designs.
The popularity of UAVs has suddenly taken off. Anyone can buy a UAV over the counter or from an online supplier, and people are doing just that and then flying them around with no licence and no training. Sooner or later there is going to be a nasty accident. There have recently been a number of near misses between drones and civilian aircraft in various countries. Such an incident is called an airprox, which is defined as 'a situation in which, in the opinion of a pilot or a controller, the distance between aircraft, as well as their relative positions and speed, was such that the safety of the aircraft involved was, or may have been, compromised.' The number of airprox incidents in the UK has increased alarmingly in the last two or three years (see Figure 28), and the same pattern is developing in other countries too.
A number of incidents involving drones are described in Wikipedia, and they show two common themes. One category is events involving a drone and a commercial aircraft: 26 events from 7 countries. The other category covers more varied situations in which only a drone was involved and no aircraft, but where property was damaged, a third party on the ground was injured, or the drone crashed and was damaged or destroyed; in all, Wikipedia lists 33 examples of such incidents from 9 countries. In some cases the operator (pilot) of the drone was never traced, but in several cases the pilot was identified and in some cases punished. This shows just how widespread, and how serious, the problem is.
This brings us to the question of the legal situation with regard to owning and operating a drone. An initial attempt to clarify this situation was made in the article by Cracknell (2017). In recent years the massive increase in the availability of drones to the general public has led to a worrying situation in which ignorant or careless people, let alone people with malevolent intent, could cause serious damage and even loss of life through irresponsible flying of drones. One can draw an analogy with the motor car, for which the extensive legislation that now exists all over the world is the result of engineering and social development over several decades. Individual countries are at different stages of controlling UAVs and their use, but we are seeing the introduction of requirements for (a) registration of drones by their owners, (b) display of an identification label on the drone itself, and (c) licensing of 'pilots' of drones. Although drones are unmanned, every drone must have some person who is responsible for its operation and can be regarded legally as its pilot.
The licensing of pilots is going to involve training and testing, much as currently there is a need for training and testing if you want to drive a car or other motor vehicle. So, the advice is if you are going to use a drone for remote sensing make sure that you are doing so in accordance with the law of your own country.

Conclusion
Perhaps the most important observation of all is to note the extent of the human effort that goes into remote sensing. As with the question of data handling, it is probably impossible to measure directly how many people are involved and what the extent of their involvement is; we can only really consider indirect indicators. At the outset it was intended to make a fairly systematic survey of the expansion of the number of journals and articles published on remote sensing over our 40-year study period, to serve as some indicator of the number of people involved. However, this soon turned out to be difficult to quantify, and the conclusion was rapidly reached that it was not simple, life was too short and there were more interesting things to do with one's time.
We leave a study of the number of people employed in remote sensing as a project for some aspiring PhD student in the social sciences. There is an interesting survey analysing the literature on global remote sensing trends over the period 1991–2010 by Zhuang et al. (2013). We simply state that the number of people involved in remote sensing now is enormously greater than it was 40 years ago, and just quote one or two illustrative examples and make one or two points. Whatever the absolute change in the number of papers in photogrammetry, the number of such papers relative to papers in remote sensing has declined enormously. One can look at the expansion of the number of journals in remote sensing, the number of articles and the number of authors over the last 40 years. In the Journal Citation Reports published in June 2017 there are 29 journals listed in the field of remote sensing. Some journals are new since 1978, while others are long-established journals which have changed their names since 1978. Good examples are provided by a number of photogrammetric journals; it will be recalled that we said that the science of photogrammetry was well established in 1978. Thus the ISPRS Journal of Photogrammetry and Remote Sensing was formerly Photogrammetria, which published volume 34 in 1978. The Photogrammetric Record already existed and has not changed its name; it published volume 9 in 1978. Photogrammetric Engineering and Remote Sensing was formerly Photogrammetric Engineering, which published volume 44 in 1978. Then there are some remote sensing journals which already existed in 1978 and have not changed their names at all: Remote Sensing of Environment published volume 7 in 1978 and continues under the same name, and the Canadian Journal of Remote Sensing published volume 4 in 1978.
Finally, some journals were entirely new since 1978, for example the International Journal of Remote Sensing with volume 1 in 1980, Remote Sensing with volume 1 in 2009, and Remote Sensing Letters, which was spun off from the IJRS in 2010, and so on. So even obtaining a simple count of the new journals introduced since 1978 proved not to be easy.
Clearly there has been an expansion in the number of journals on remote sensing, both by new journals being started up and by established journals expanding their scope and changing their names to reflect the inclusion of remote sensing. But, of course, journals also change their size, usually expanding in the case of an expanding subject like remote sensing. Without attempting a systematic survey of the expansion of the total number of papers in remote sensing journals from 1978 to, shall we say, 2018, we just consider one example, the International Journal of Remote Sensing (IJRS). This journal did not exist in 1978, but its first volume was published in 1980, so that first volume can be regarded as based on work carried out in about 1978; see Table 4. One should not attach too much significance to the actual figures, but they give a general impression that is reasonably valid.
Like the number of journals or the number of published papers in remote sensing, the increase in the number of people working in remote sensing over the last 40 years is difficult to quantify. Someone may have attempted to research this, but I do not know of such a study. Clearly the same sort of expansion has occurred as occurred in the number of papers, or even more so, given that most papers have multiple authors. However, a word of caution is appropriate. Let me take the case of China, for example. Most western remote sensing journals have seen a flood of papers coming from PR China in recent years. Part of the reason, of course, is the requirement for a PhD student in China to have published in an international journal in English. We should not, however, assume that the present level of activity in China has grown from a zero base; this would not be true. The number of remote sensing papers from China published in western journals in 1978 was probably virtually zero, but this does not mean there was no activity in China. There was, of course, the Cultural Revolution, involving the closure of the universities, but in 1984, when that was over, I visited China and I distinctly remember there already being a flourishing photogrammetric activity in Wuhan, for instance. By 1984 Peking University had established a remote sensing programme and I was invited to give a course on infrared remote sensing there, along with two other colleagues from the UK. On that occasion I took some 2400-foot magnetic tapes with raw AVHRR data on them, and there was a computer which could read them and software to extract the digital numbers and print them out in much the same way that I have described already in relation to our work on the Tay Estuary. We were thus able to use digital AVHRR data to study sea surface temperature quantitatively as a student exercise in that course in October 1984 in Beijing.
It is difficult to write a conclusion to a review such as this. Everything that the author wishes to say has already been said, and the things that have not been said have either been left out deliberately or omitted through ignorance. The fact that this review has been written is evidence in itself that a lot has happened in the last 40 years, and some people may find it interesting to read about it to gain a little more background and perspective on their own current work. Critics are likely to complain that this is very much a personal recollection and that the account is not comprehensive; so be it, but I hope that it has given a flavour of developments in remote sensing over the last 40 years.