Defining sustainable limits during and after intensification in a maritime agricultural ecosystem

ABSTRACT Introduction: Intensification of agricultural ecosystems in the 20th century proceeded through a series of innovations that initially ensured food security, but had negative consequences for the in-field and wider environments. A case study in the north Atlantic zone maritime cropland of the UK identified three phases: (1) reorganization between 1940 and 1960, (2) intensification leading to a tripling of grain output (1960–1990), and (3) a leveling of output (1990–2015). Outcomes: Innovations that caused the changes, together with their effects on life forms, ecological processes and the evolving social, economic and biophysical conditions, are identified. Attempts to design future sustainable systems are hampered by the absence of a baseline before intensification, by inadequate knowledge of in-field processes, and by uncertainty over "safe ranges" in which processes can operate without causing long-term system decline. Safe ranges are examined for three attributes recorded over all three phases, namely grain output, fertilizer input and the wild seedbank flora. The lower limits for grain and nitrogen were quantified as those that ensure grain security. Tentative upper limits were identified as those that were economically acceptable to farming in the face of major external perturbations related to food policy and environmental protection. Within these upper limits, fields can maintain a generalist wild species seedbank that supports a farmland food web. Conclusion: Some properties of the ecosystem therefore attained stability in phase 3. However, evidence of degrading soil, increasing pesticide use to maintain yield, and the collapse of specialist plant functions such as nitrogen fixation and pollination suggests the system is moving toward a phase of decline.


Introduction and concepts
Intensification of agricultural ecosystems has occurred in many parts of the world primarily due to the expansion of cropped area, the availability of industrially made nitrogen, development of more powerful machinery, chemical pest control and crop varieties allocating more plant mass to grain (Evans 1993; Hou et al. 2012; Cui et al. 2014; Neumann et al. 2010). While agricultural output increased, in many instances to erase the threat of famine, the associated high disturbance has had damaging effects on ecological processes within fields and more widely. Despite warnings of mismanagement, harmful practice sometimes continued unabated. For example, the threat of agriculturally-induced soil erosion was recognized in the US, then not heeded, giving rise to the dust bowl collapse of the 1930s and 1940s (Bennett 1935; Bennett and Chapline 1928). In the example of the dust bowl, the intensification cycle from promising beginning to stalling and collapse happened over only a few decades. Similarly, in western Europe, the most recent and most effective phase of intensification, building on two hundred years of systematic improvement, occurred over several decades, after the 1940s (Evans 1993). Where soils are inherently rich or fertilizer can be bought, modern technology has probably shortened to a similar time scale the rise of yield from subsistence to high productivity in parts of Asia (Cui et al. 2014).
In many regions where intensification at first increased agricultural output, the yield per unit area has since leveled, or sometimes declined, despite estimates that potential yield due to genetic improvement of crops has continued to rise (e.g., Dawe et al. 2000). A yield gap therefore exists between actual and potential yield (Neumann et al. 2010; Lobell, Cassmann, and Field 2009; Squire 2017). Given, however, calls for even further intensification, there is a need to take stock of what has been achieved, why the rise in yield has stalled in many areas and what ecological damage has been caused. The issues are not just local and biophysical. As argued by Hawes et al. (2016), the emphasis needs to be on holistic assessment of the economic, social and biophysical aspects of the whole agricultural ecosystem rather than on selected processes in isolation. For example, attention should be directed to both in-field supporting functions of the soil (Lal 1997; Syers 1997; Powlson et al. 2011) and the resulting ecological cascades and feedbacks between fields, landscapes and trading regions (MacDonald et al. 2016). Ultimately, therefore, a large number of sustainability indicators is necessary to quantify a complex socioecological system (Firbank et al. 2013; Yang et al. 2015; Huber et al. 2015).
Even when an appropriate set of indicators has been chosen, there are still generic difficulties with "taking stock." First is the absence of a baseline before intensification, since no need was perceived in the early 20th century to take comprehensive measurements for future reference. International efforts to assess impacts of 20th century intensification and industrialization generally came too late to determine the baseline. For example, the International Geosphere-Biosphere Programme (IGBP 1987) and the Global Terrestrial Observing System (GTOS 1996) did not begin until the later decades of the 20th century. In a region of developed agriculture such as the UK, where the work for this paper is based, preparation for intensification began in the 1940s and 1950s, yet with few exceptions monitoring of various life forms and biophysical attributes did not begin until the 1960s or later, when intensification was well under way, and even then the intervals between surveys were sometimes too long to capture change caused by the rapid developments in agriculture (e.g., Preston, Pearman, and Dines 2002; Kernan et al. 2010; Carey et al. 2008; ECN 1992). Prior conditions are therefore difficult or impossible to define for those regions that have already undergone intensification. Moreover, where intensification has occurred, it has generally been pervasive such that few pre-intensification comparators remain in mainstream agriculture (Squire et al. 2015).
The second difficulty lies in the uncertainty of what is ecologically or environmentally "safe," for example in terms of the rate of an ecological process: an uncertainty that arises in part through a disconnection between the in-field scale at which the initial impacts of agriculture operate and the broader scale at which environmental impacts of intensification are usually measured. The principle of managing systems through "safe limits" is already widely accepted, for example, when setting critical loads of pollutants and acceptable ecological and chemical status of water (EC 2000; EU Water Framework Directive 2000; Kernan et al. 2010), maximum residue limits and acceptable daily intake of pesticides in food (WHO 1987) and the economic labeling threshold for GM presence in non-GM harvest and food products (Messean et al. 2009). Such limits, however, tend to be set and monitored well 'downstream' of the original processes within fields.
These challenges in "taking stock" (lack of baseline, uncertainty of in-field status and safe limits) are here examined by reference to an agricultural ecosystem that was reorganized in the 1940s and 1950s, underwent increasing intensification between the 1960s and the late 1980s, then entered a period after 1990 in which yields leveled (Squire et al. 2015). In terms used in the Millennium Ecosystem Assessment (MA 2005; Yang et al. 2015), indirect drivers of the change were mainly economic and sociopolitical, whereas direct drivers were external input (e.g., fertilizer) and technology (e.g., crop improvement and powerful machinery). To enable consistent analysis, intensification is interpreted through a scheme (Figure 1) in which interventions in the form of agronomy and crop varieties work on basal states to initiate a chain of effect through the abundance and activity of "life forms," such as plants, invertebrates, and microbes, which themselves mediate ecological processes such as primary production, nitrogen cycling, and soil formation. The rates of these processes determine whether higher-level outputs or ecosystem services are met, and at a higher scale still, whether production ecosystems combine with urban and other rural activities to influence national and continental indicators such as the globally recognized 2015 Sustainable Development Goals (SDG 2015).

Figure 1. Schematic diagram of the chain from the pedoclimatic context of an agro-ecosystem through human interventions, the biota and ecological processes to higher-level states defined by ecosystem services and global Sustainable Development Goals. Measured indicators of intensification in the case study are listed in Tables 1 and 2.

Methods, sources, and framework for analysis
Following an Agricultural Expansion Programme (DAS 1952), modern agriculture in the region developed in three main phases, each of around 25 years: (1) phase I, realization, reconstruction, and the beginnings of intensification from the 1940s to around 1960; (2) phase II, the main phase of intensification from the early 1960s to the late 1980s; and (3) phase III, leveling, adjustment, and further selective intensification after 1990. The main agronomic interventions responsible for the transitions that led to the increased yield, together with their effects on life forms, ecological processes and ecosystem services, are summarized in Table 1. The effect of these transitions on the status of ecosystem services and appropriate Sustainable Development Goals (SDG 2015) during the three phases of intensification is documented in Table 2. Of the 17 current SDGs, 6 are considered to be relevant to agricultural activity in the region under study: No poverty (SDG 1), Zero hunger (2), Decent work and economic growth (8), Sustainable cities and communities (11), Climate action (13) and Life on land (15). (Other SDGs, such as Quality education (4) and Peace, justice and strong institutions (16), would need to be achieved by activities other than agricultural intensification.) The extent to which the SDGs have been achieved (retrospectively in phases I and II) and their current status in phase III are summarized in Table 2 and related text.

Sources of data
The arable region chosen is one of historic, now advanced, agriculture in a temperate climate between latitudes 55.5° and 58.5°N and longitudes 2.0° and 4.5°W, mostly within 35 km of the coast in the agricultural census region of east Scotland. In the census of 1940-1949 (DAS 1949; DAS 1952), arable land averaged 13,275 km², consisting of 7833 km² (58%) "crops and fallow" and 5443 km² "temporary grassland." The estimate of arable land from an aerial photographic survey in 1947, 7,745 km² (Mackey, Shewry, and Tudor 1998), is close to the area classed as crops and fallow in the government census of the same year. Over the next 70 years, the areas of arable and temporary grassland have shown minor change as land shifted variously between "crops and fallow" and "temporary grassland." Now its provisioning outputs consist of grain and other arable crops, grass for grazing and cutting, wood extraction and miscellaneous other products (ERSA 2015). Soils are adequate and the climate mild and moist enough to support grain yields that are high for the UK, typically 5-6 t ha⁻¹ for spring-sown and 7-9 t ha⁻¹ for autumn-sown barley and wheat (ERSA 2015; Squire et al. 2015). Supporting services include the maintenance of a healthy soil, of botanical diversity in its ruderal flora, which is distinct from that in the surrounding uplands, and of food webs that satisfy a range of processes including comminution of plant litter, pest biocontrol, and pollination, all contributing to carbon and nitrogen cycling. Regulating services include a capacity to hold and channel rain water and outflows from higher regions and to limit soil erosion and pollution from agricultural activity. Cultural services include employment, esthetic and recreational activity for nearby population centers, and iconic plants, insects, and higher animals (State of Nature Report 2016). Baseline environmental data were rarely recorded in the 1940s and 1950s.
Table 2 gives references and links to systematic records of plants, invertebrates and birds from the 1960s and general land use and environmental change from the 1990s.

Agricultural inputs and outputs
Data on agricultural inputs and outputs comprise land area occupied, numbers of stock animals, and yield of the main crops (ERSA 2015, and all previous versions of this annual account back to DAS 1949). Total grain output is derived in this paper by summing the output for oats, barley, and wheat. For comparing grain output with energy needs of the population, the calorific value of cereal meal is taken to be on average 4.3 kcal g⁻¹, and the daily energy requirement of a moderately active person to be 2500 kcal (EFSA 2013), equivalent to 580 kg grain annually. Population is as detailed in the decadal national census. Systematic records of fertilizer applications are available as total usage from the 1960s and as various "unit area" measures from the 1970s (Fertiliser Practice 2016, and all previous annual reports in this series), augmented by some regional estimates (Domburg et al. 1998) and earlier records (Church and Lewis 1977). Pesticides have been recorded since 1974 for arable crops, less frequently for grass (Monie, Reay, and Wardlaw 2014, and all previous reports in the series "Pesticide Usage in Scotland"). Here, the general indicator used is the number of pesticide formulations applied on average to a unit area of land, estimated by summing the areas that received the different pesticides and dividing by the area of the crop.
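The pesticide-load indicator described above reduces to a short calculation; the sketch below is illustrative, with the function name and example figures chosen here rather than taken from the surveys.

```python
def pesticide_load(treated_areas_ha, crop_area_ha):
    """Mean number of pesticide formulations applied per unit crop area.

    Computed, as in the text, by summing the areas that received each
    formulation and dividing by the total area of the crop.
    """
    return sum(treated_areas_ha) / crop_area_ha

# Hypothetical example: a 100 ha cereal crop in which three formulations
# were applied to 100, 80 and 60 ha respectively.
print(pesticide_load([100, 80, 60], 100))  # 2.4
```

A value of 2.4 would sit between the phase I (~1) and high-input phase III (10-12) cereal loads reported in the surveys.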

Soil and in-field life forms
The soils of the region are recorded in standard soil survey, augmented by specific sampling for organic matter and susceptibility to erosion (Dobbie, Bruneau, and Towers 2011). The biophysical state of arable soils was assessed in terms of attributes such as soil carbon, bulk density, water holding capacity, and porosity (Valentine et al. 2012). However, there are no comparable biophysical surveys in the period back to phase I. The functional links between cropping intensity and soil condition are not fully quantified, but the crop sequences of highest intensity occur in fields with <2.2% soil carbon content, whereas those of moderate-to-low intensity occur in fields with >3% (Squire et al. 2015).
Life forms within fields have not been systematically measured over the period. Records from before phase I exist of the weeds of farmland (Brenchley 1920; Long 1929), but not in terms of explicit abundance or distribution. Botanical surveys have included plants of farmland, but generally record presence or absence in large grids (e.g., 10 × 10 km) rather than abundance. The first botanical survey, published in 1962 (Perring and Walters 1962), was not repeated in full until the 1990s (Preston, Pearman, and Dines 2002). The only indicator of in-field life forms that has been measured in terms of both presence of taxa and abundance over much of the 20th century is the soil seedbank, a universal feature of disturbed arable land (Squire, Rodger, and Wright 2000; Squire et al. 2003; Hawes et al. 2005).

[Table 1 legend. The starting condition is low-input spring crops, mainly oat, with grass. Symbols: > and <, increase or decrease by up to a factor of 2; >?, by a factor of around 2 but uncertain; ≫ or ≪, by more than a factor of 2; (?), uncertain due to lack of data. General sources: government reports and yearbooks from ERSA (2015) back to DAS (1949); Fertiliser Practice (2016) and all annual reports back to the 1940s; Pesticide Use Surveys (e.g., Monie, Reay, and Wardlaw 2014) back to 1974; more specific sources are given in the text and Table 2.]

[Table 2 excerpts. Fertilizer sources: Fertiliser Practice 2016 and previous as above; Domburg et al. 1998. Pesticide load (number of formulations applied per unit field area; mean over a cropping sequence): phase I, about 1 application per unit crop area per year; phase II, 2 rising to 10 in high-input cropping (5 in medium-input); phase III, medium-input cereal 4-6, high-input cereal 10-12, potato crop 20-25.]

Seedbank records began in the UK before phase I and continued, though unsystematically, through to phase III. The species present and their broad change in abundance over time are largely consistent among different parts of UK arable land (Hawes et al. 2005). The seeds themselves provide food for invertebrates, mammals, and birds and return organic matter to the soil when they die; seedbank species also emerge in the crop and take a portion of resource which they use to grow, and in this form they mediate other ecological processes. The relation between seedbank and emerged plant abundance is not direct, being influenced by field management, but if species are in low abundance in the seedbank they are unlikely to emerge in sufficient abundance to mediate a process. Of the studies summarized in Hawes et al. (2005) and Squire et al. (2003), the following are chosen as representative for the periods (based on UK data as a whole) and present sufficient quantitative information to enable broad change in abundance of species and functional types to be illustrated: Milton (1943) and Roberts (1958) for phase I; Roberts and Stokes (1966), Roberts and Chancellor (1986) and Warwick (1984) for phase II; and Heard et al. (2003), Debeljak et al. (2008) and Hawes et al. (2010) for phase III. The seedbank counts are expressed as germinated seed per unit field area, standardized to a depth of 0.2 m. Seedbanks are examined in broad categories of monocotyledonous (grass) species, e.g., Poa annua, P. trivialis and Alopecurus myosuroides; dicotyledonous or broadleaf species, e.g., Capsella bursa-pastoris, Stellaria media, Viola arvensis, and Myosotis arvensis; and several specific categories including poisons and irritants, e.g., Euphorbia exigua, Senecio jacobaea, Sinapis arvensis (Long 1929); nitrogen-fixing legumes, mainly of the genera Vicia, Medicago and Trifolium; groups supporting pollinators, e.g., the genera Lamium, Mentha, Galeopsis of the Lamiaceae and several of the Asteraceae; and iconic species of cultural significance, e.g., Centaurea cyanus, Chrysanthemum segetum, and Scandix pecten-veneris.
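Standardizing seedbank counts to germinated seed per unit field area might be sketched as follows. The core dimensions are hypothetical, and scaling counts to the 0.2 m reference depth by simple proportion assumes seed density is uniform with depth, which real seed profiles only approximate.

```python
import math

def seedbank_density(germinated_seeds, n_cores, core_diam_m=0.05,
                     core_depth_m=0.2, ref_depth_m=0.2):
    """Convert germinated-seed counts from soil cores to seeds per m^2,
    standardized to a reference sampling depth (0.2 m in the text)."""
    sampled_area_m2 = n_cores * math.pi * (core_diam_m / 2) ** 2
    per_m2 = germinated_seeds / sampled_area_m2
    # Proportional depth scaling: assumes uniform seed density with depth.
    return per_m2 * (ref_depth_m / core_depth_m)

# Hypothetical example: 100 germinated seeds from 20 cores of 5 cm
# diameter sampled to 0.1 m, scaled to the 0.2 m standard depth.
print(round(seedbank_density(100, 20, core_depth_m=0.1)))  # 5093
```

A count of ~5000 m⁻² would fall at the upper end of the phase III seedbank values discussed in the Results.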

Framework for analysis
Indicators of high-level outputs and states, including ecosystem services and SDGs, in each of the three phases of intensification are given in Table 2. Of the attributes of the agro-ecosystem, only those relating to crop production (area, yield, total output) and fertilizer input have been recorded over most of the period. Of biophysical attributes, only the in-field seedbank has been recorded in all three phases; soil biophysical condition was not assessed in detail until phase III. Sources cited above and in Table 2 are now examined to quantify the changes in grain output (e.g., ERSA 2015), nitrogen input (e.g., Fertiliser Practice 2016) and the in-field weed seedbank community (e.g., Hawes et al. 2005; Squire et al. 2003) in an attempt to define sustainable or "safe" limits for in-field processes. Here, "safe limits" refers to the state of a provisioning system that supports a population and an economic farming industry sustainably into the future in a way that does not degrade soil and other supporting functions or impact adversely on the wider environment (Tables 1 and 2). A potential scheme for defining limits is illustrated diagrammatically in Figure 2, which shows a hypothetical process changing over time. The solid central line indicates its mean, while a pair of inner limits (A1 and A2) define the range A in which a process may vary indefinitely or sustainably, and a pair of outer limits (B1, B2, range B) define the range within which it is able to operate at all. If the population or process strays outside A, "work" in the form of interventions is needed to bring it back within A. If the population or process moves outside B, it collapses and ceases, at least until it is rebuilt or restored. Not all processes require two sets of outer limits: for some there is simply a range in which the process may operate indefinitely (A), another where it operates sub-maximally, and a third where it collapses. In such instances, the total range comprises <B1 (collapse), B1 to A1 (sub-maximal operation) and A.
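The Figure 2 scheme can be expressed as a simple classifier. The function, its labels, and the example limits below are illustrative only; in particular, the outer B limits are hypothetical round numbers, since the text argues only that B must extend beyond the observed range.

```python
def classify_state(x, a1, a2, b1, b2):
    """Place a process value within the Figure 2 scheme.

    a1, a2: inner limits of range A (indefinitely sustainable variation).
    b1, b2: outer limits of range B (operable at all); b1 < a1 <= a2 < b2.
    """
    if a1 <= x <= a2:
        return "within A: may vary sustainably"
    if b1 <= x <= b2:
        return "within B: operable, but intervention needed to return to A"
    return "outside B: process collapses until rebuilt or restored"

# Hypothetical example using grain output (kt), with range A roughly
# 1400-3000 kt as discussed later in the text and assumed B limits.
print(classify_state(2500, 1400, 3000, 500, 4000))
```

This makes explicit the asymmetry of the scheme: straying outside A costs "work," while straying outside B is terminal for the process.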
The analysis in Figure 2 is now attempted for those attributes identified above as having coverage in all three phases. In each case, the available data are examined with a view to defining ranges A and B and the current status of the variables in terms of those ranges.

Results: safe limits, sensitivity, and resilience
Response variable: grain production
In phase I, before intensification, the total grain output from the region was 1000-1200 kt and mean cereal yield 2-3 t ha⁻¹. During phase II, output increased by a factor of around three to a maximum of 3050 kt (Figure 3) due to the innovations identified (Tables 1 and 2). The main change in the biota (Figure 2) was initially through crops, with the replacement of oat as the dominant crop (13-fold fall in area between 1945-1955 and 1985-1995, not shown) by first the potentially higher-yielding barley and later wheat, a shift to grain from grass (1.27-fold increase, 1960-1985), and then a rise in yield per unit area in all crops (1.72-fold between 1960 and 1975) through intensification mechanisms (interventions in Figure 1; Table 1). By the late 1980s, however, the rise in yield per unit area had lessened, and in phase III from the mid-1990s the total output had leveled.
The extent of range B, in this case defined as the range of output in which grain can support a human population (even if part of it goes hungry), must be larger than the range delimited by the traces in Figure 3, since the region has existed in previous centuries on less output than this and intensification has not caused collapse of the ability of the system to produce grain. The lower limit A1 (grain) could be assigned on the basis of satisfying provisioning services as the minimum that would provide the food energy requirements of the population. Assuming the energy needs described in Materials and Methods, grain security would be around 1400 kt, which was achieved in the early 1960s, a few years after the start of phase II (A1 line on Figure 3(a)). In reality, the output of grain at this time also had to contribute to the upkeep of a large number of cattle and other farm animals. Nevertheless, 1400 kt provides an indication of the limit A1 at which agriculture could have provided staple grain for the existing population. The corresponding value for average yield per unit area across oat, barley, and wheat, weighted by their respective areas, was 3.4 t ha⁻¹ (A1 line on Figure 3(b)). Criteria for setting the upper limit A2 (grain) are problematic since the purpose of agriculture changed from one aiming to produce enough food in the early part of phase II (DAS 1949) to one producing economic outputs in a global market, namely non-food products and exports for industrial (mainly alcohol) feedstocks and animal feed (ERSA 2015). At no time has the limit A2 (grain) been defined on scientific grounds as a grain output that can be sustained without self-limitation due to the effect of intensification on soil and biota. Rather, several, mostly external, factors limited the further rise of output in and after the early 1990s.
First, overproduction was considered to have been reached in much of Europe in the early 1990s, when the policy of set-aside was introduced, in which some grain land was taken out of production (EEC 1988). The marked drop in total grain output during the first two years of set-aside is indicated by the letter a in Figure 3(a). Second, years of wet, cloudy weather depressed output, as indicated by the letter b in Figure 3(a). While yields recovered in subsequent years, the depressed yield indicates the potential of a pedoclimatic limit due to the propensity of soils to become waterlogged. Third, agronomic limitations include restrictions on the use of N fertilizer due to EU policy (see next section) and a continuing rise in pesticide use during phase III (Table 2), indicating that pests were repeatedly overcoming methods to control them. Fourth, degrading soil properties (see Materials and Methods) may be exerting a further limit, notably at sites dominated by the most intense cereal, winter wheat, and also by high-input potato and vegetables (Squire et al. 2015).
Excluding the effects of set-aside and very wet weather, mean production fluctuated around 2500-3000 kt, or 6.4 t ha⁻¹ average grain yield, which offers one possible and broadly workable definition of A2 (grain). A slightly lower limit can be identified if the high-intensity sites found to be associated with degrading soil (Squire et al. 2015) were to be occupied with lower-intensity crops. The high-intensity sites occupied about 50% of the winter wheat area, which itself occupied 23% of the total grain area in phase III. The mean yield of winter wheat was higher than the average cereal yield (Figure 3(b)), but replacing these sites with others of average intensity would only reduce grain output after year 2000 by a factor of 0.96, giving a slightly lower estimate of A2 (grain) of 2710 kt total output and 6.2 t ha⁻¹ mean grain yield. The basis of these working definitions of A2 (grain) seems to be what farming will tolerate or adapt to in response to a variety of external and internal factors. Moreover, this limit is not one desired by growers: in a global market, much of the cereal production here is barely profitable at these levels (ERSA 2015).
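The working limits for grain can be reproduced approximately in a short sketch. The population figure (roughly 2.4 million) and the winter wheat yield (8.6 t ha⁻¹, within the 7-9 t ha⁻¹ range quoted for autumn-sown cereals) are assumed values for illustration, not figures taken from the paper, which draws on census data.

```python
def grain_security_kt(population, per_capita_kg=580):
    """A1 (grain): minimum annual output (kt) meeting the food-energy
    requirement of the population (580 kg per person, as in Methods)."""
    return population * per_capita_kg / 1e6  # kg -> kt

def output_factor(frac_high_intensity, wheat_yield, avg_yield):
    """Relative change in total grain output if the high-intensity winter
    wheat sites were replaced by crops yielding the regional average."""
    return 1 - frac_high_intensity * (wheat_yield - avg_yield) / avg_yield

# A hypothetical population of ~2.4 million gives A1 close to 1400 kt.
print(round(grain_security_kt(2.4e6)))  # 1392

# High-intensity sites: 50% of the winter wheat area, which is 23% of the
# grain area; assumed wheat yield 8.6 t/ha against the 6.4 t/ha average.
print(round(output_factor(0.5 * 0.23, 8.6, 6.4), 2))  # 0.96
```

At these assumed values the sketch recovers both the 1400 kt A1 (grain) limit and the 0.96 adjustment factor quoted in the text.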

Response variables: the example of nitrogen
Nitrogen content in crops is stoichiometrically coupled to plant mass, since most crops require 1-2% N to function and to produce usable yield. Limits are defined in relation to two of the main effects of nitrogen: one as a necessary support to grain production, and the other as a pollutant. Nitrogen inputs, in total and per unit area, showed a trajectory broadly similar to grain yield (Figure 4). The range B for N applications must, as for yield, extend beyond the whole range of the trajectory in Figure 4, since production of grain and grass operated previously at lower inputs and the high levels experienced at the end of phase II have not caused collapse of the capacity to produce grain or of soils to support life. The lower limit of range A for nitrogen input to support production, A1 (N), could be defined as that needed to support the corresponding limit A1 (grain). Comparing Figures 3 and 4, A1 (N) on this basis was around 60 kt (to all crops and grass), equivalent to 40 kg ha⁻¹ to cereals. Defining the limit A2 (N) is again more challenging since mineral N used during intensification began to exert external damaging effects, for example on water quality. Nitrogen additions to both grain and grass accompanied the rise in yield throughout the period (Figure 2, Table 1), and peaked at the end of phase II at 190-200 kt, or 120-150 kg ha⁻¹ for each of arable crops (mostly cereals) and grass. Subsequent directives aimed to curb nitrates in water across Europe (EEC 1991; EC 2000), leading to reductions at letters a and c in Figure 4(a), which together with set-aside, letter b (see previous section), caused N inputs to cease rising and then to fall, albeit with fluctuations. Major reductions occurred in the quantity of N applied to manage grass (nitrogen in livestock diet being supplied instead from imported protein feed).
In most cereal crops, N applications were constrained to a maximum and began to be withheld until after the winter, the crops growing initially on residual soil N (Defra 2007; Defra 2010). A global rise in the cost of N fertilizer in 2007 (letter d) resulted in a further fall, followed by an increase a few years later when prices fell back. After each response to an external policy directive or other influence, N input returned to about 150 kt, or 100-120 kg ha⁻¹ for grain, which may be a workable upper limit of A2 (N), defined as a balance between reducing environmental impacts and what commercial agriculture would adapt to or tolerate.
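The stoichiometric link between yield and nitrogen can be illustrated with a rough offtake calculation. The 1.5% grain N concentration below is an assumed midpoint of the 1-2% range given in the text, not a measured value.

```python
def grain_n_offtake_kg_ha(yield_t_ha, n_fraction=0.015):
    """Nitrogen removed in harvested grain (kg N per ha), assuming a
    fixed N concentration in the grain dry matter."""
    return yield_t_ha * 1000 * n_fraction  # t/ha -> kg/ha, times N fraction

# At the A1 (grain) yield of 3.4 t/ha, the grain alone removes about
# 51 kg N/ha, more than the ~40 kg/ha A1 (N) fertilizer input; the
# balance would come from residual and mineralized soil N.
print(grain_n_offtake_kg_ha(3.4))  # 51.0
```

The sketch makes concrete why N input cannot fall far below A1 (N) without drawing down soil reserves: harvested grain exports N in near-fixed proportion to yield.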

Response variable: farmland seedbank flora
In phase I, seedbanks were typically above 10,000 seeds m⁻² and in some fields above 100,000 m⁻² (Hawes et al. 2005). Many of the species, both grass and broadleaf, were considered competitive to crops, while poisonous and irritant species occupied as much as 35% of the total seedbank (Figure 5). Among the specialist groups (see Methods), wild legumes were the next most prevalent (Figure 5). The main trends in phase II were a general decrease in abundance, a decrease in the broadleaf or dicotyledonous (dicot) component from 80% to <50%, an increase in grass species abundance from 10% to >50%, and major reductions both of poisons and legumes (Figures 5 and 6) and, at the same time, of those groups supporting pollinators and of iconic and cultural status (Figure 6). Defining limits against this background is more problematic than for grain and nitrogen, since there have been major changes over time in the agronomic context, specifically the ability to control the negative effects of the main weed groups. For example, a high weed abundance that limited grain output in phase I might not be limiting in phase III due to the advanced capacity of agronomy to manage emerged weed populations.
The general broadleaf (dicot) group that was the most abundant in phase I has an upper range in which it reduces the crop and a lower range in which it fails to support food web organisms (Figure 6). High abundances in phase I and early phase II (e.g., 10,000 m⁻² and above) were considered to be limiting yield (Brenchley 1920; Long 1929) and so were in the range A2-B2. An increase of chemical herbicides and more effective tillage during phase II reduced seedbanks to <10,000 m⁻² and typically 3000-5000 m⁻² (Hawes et al. 2005; Hawes et al. 2010). Such values were found by a major set of field experiments in phase III to impose no or very little restriction on cereal yield, provided chemical herbicides were available to control emerged weeds (Young et al. 2001; Squire, Rodger, and Wright 2000). Few studies have examined on a broad scale the links between seedbank, emerged weeds and the food web so as to define A1. Nevertheless, and despite the general dicot seedbank being depressed in phase II relatively more than the grass seedbank, evidence from UK-wide field studies indicates that values of 2000-3000 m⁻² were able to support an emerged dicot flora that in turn supported an active food web mediating a range of trophic functions, including decomposition, predation, and parasitism (Heard et al. 2003). This flora emerged mostly in break crops such as oilseed rape and was suppressed by the higher herbicide usage in cereal crops. Range A, in which both functions can in principle be satisfied, appears therefore to be a seedbank of 2000-5000 m⁻² consisting of a broad complement of species. Many fields in the survey by Hawes et al. (2010) lie in this range.
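The working range for the general dicot seedbank can be checked mechanically. The thresholds follow the figures above (food-web support from about 2000 m⁻², range A up to about 5000 m⁻², yield limitation above about 10,000 m⁻²); the function itself and its wording are illustrative.

```python
def dicot_seedbank_status(seeds_per_m2, a1=2000, a2=5000, b2=10000):
    """Interpret a general dicot seedbank count (seeds per m^2) against
    the working limits discussed in the text."""
    if seeds_per_m2 < a1:
        return "below A1: too sparse to support the food web"
    if seeds_per_m2 <= a2:
        return "within range A: supports the food web without limiting yield"
    if seeds_per_m2 <= b2:
        return "A2-B2: manageable, but may restrict yield without control"
    return "above B2: likely to limit grain yield"

# A typical phase III field value of 3500 seeds per m^2 falls in range A.
print(dicot_seedbank_status(3500))
```

Note the caveat in the text: the same count can shift category as the agronomic context (herbicide availability, resistance) changes, so the thresholds are context-dependent rather than fixed.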
The grass species can be suppressive of yield but have less of a role in supporting the food web. In phase I, they were reported to be limiting yield in some instances, but were then brought within range A by chemical herbicides. Their increase in phase III could take them above A2, especially if herbicide resistance became prevalent, as it is in some parts of northern Europe. Whether grass species have been and are limiting yield therefore depends largely on the evolving contexts of herbicide resistance and chemical control (Figure 6).
Of the specialist groups, the poisons and irritants also reduce the crop at high density and fail to support the food web at low density. No formal limits to comply with food safety appear to have been set during phase I, but in general in this region, poisonous plants have been reduced to ≪1% of the seedbank and <100 m⁻² by a combination of herbicides and rigorous seed cleaning, as part of crop certification procedures. In relation to limiting the quality of yield, poisonous species were in the range A2 to B2 in phase I, but are now well below A2 for this limiting property. However, their abundance became so low that it was below A1 for the function of supporting the food web. For broadleaf poisons and irritants, there is therefore no optimal range A where the two functions can be satisfied at the same time: the need for very low populations to safeguard food and feed quality overrides any limitation of the food web.
The functional contributions of legume weeds and of other potentially beneficial groups, such as plants supporting pollinators, were not measured, or even generally acknowledged, in phase I, but they sometimes occurred in numbers that had the potential to limit yield as well as provide beneficial functions such as nitrogen fixation. However, they have been so depleted over time as to be below A1 and in some cases below B1, which means they are no longer able to support their positive functions and some species are locally extinct. All such types are now in the range A1-B1 or <B1 (Figure 6).
The designation of main and specialist groups in relation to limits is therefore highly dependent on agronomic context, but is also influenced by overlap and redundancy in function. Thus the broadleaf poisonous and irritant species can be reduced with little effect on the general food web, provided a broad group of other dicot weeds remains in range A. By contrast, the legume, cultural-iconic and pollinator species have (by definition) no counterparts in the broad dicot assemblage, so when reduced in abundance, their functions are also reduced or cease.

Discussion
The agronomic interventions, life forms, processes, and services examined are common to production ecosystems generally. In the region studied, ecosystem services and high-level outputs were achieved to very different degrees during the phases of intensification. The major positive effects on provisioning services (Figure 1; Table 2) would by the mid-1960s have satisfied today's 2015 Sustainable Development Goal (SDG) of Zero hunger (SDG 2). In contrast, Life on land (SDG 15) has been compromised (Table 1) by the decline and loss of many species, including those arable plants and their associated invertebrates that live mainly within disturbed agricultural land. Climate action (SDG 13) was initially compromised through substantial increases in nitrogen and phosphate fertilizer after the 1940s, but regulation and downward trends are evident for phosphate since the 1960s and for nitrogen after its peak in the 1990s (Figure 4). However, high N usage for high-intensity crops is inevitable, because plant matter must contain nitrogen at relatively high concentration (1-2%). There is a need therefore to consider the degree to which the goals of Life on land and Climate action can be achieved in this ecosystem at the same time as having an economic agriculture.
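The stoichiometric point can be made concrete with a simple worked figure. The 1-2% N concentration is from the text; the grain yield used below is a hypothetical value chosen only for illustration.

```python
# Worked illustration of the stoichiometric constraint: plant matter must
# contain roughly 1-2% nitrogen, so high grain output implies a floor on N
# demand. The yield figure below is hypothetical, for illustration only.

grain_yield_t_ha = 8.0          # hypothetical grain dry matter, t ha^-1
n_concentration = (0.01, 0.02)  # 1-2% N in plant matter, from the text

# N removed in harvested grain alone, kg ha^-1, at each concentration
n_removed_kg_ha = [grain_yield_t_ha * 1000 * c for c in n_concentration]
print(f"N removed in grain alone: {n_removed_kg_ha[0]:.0f}-"
      f"{n_removed_kg_ha[1]:.0f} kg ha^-1")
```

Even before accounting for straw, roots and losses, the harvested grain alone removes on the order of 100 kg N ha−1 at this hypothetical yield, which is why N input cannot be reduced independently of grain output.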

Problems in defining A2 for grain output and other properties
Defining A1 in terms of grain output and ancillary properties was straightforward: there was a single driving imperative. In and before the 1940s, agriculture was unable to support the population, but by the middle of phase II it was. The main challenge in this paper was to define the upper limit A2, within which a diversity of processes might operate sustainably. There are two main problems: the limit is governed by several factors, and it depends on the context, which itself has undergone change. The limit A2 (grain), and hence A2 (N), were identified provisionally by repeated return to a consistent value following perturbations caused variously by action to curb grain surpluses in the EU, by a need to reduce the polluting effect of nitrate in water and by a global rise in the cost of fertilizer. The implication is that farming adjusted after each event to a grain output considered still acceptable for commercial survival. Factors internal to the field also have a role in setting a purely biophysical limit to A2. The degradation of soil properties on high-intensity sites (Valentine et al. 2012) may itself limit yield (Squire et al. 2015). The limit A2 proposed here (in Figures 3 and 4) may need to be revised, however, should further evidence emerge of change in soil properties or of longer-term effects of nitrate and agrochemical pollutants in water courses (ECN 1992; Carey et al. 2008). The limit A2 as defined does not require that all fields lie below it. As is the case now, some fields can continue to yield well above and some below the limit. While some flexibility will exist in reducing N inputs further (Defra 2007), the stoichiometry of dry matter to nitrogen will limit the movement of one independently of the other.
The limits A2 for grain and N are also shown here to be achievable with general arable seedbanks below about 5000 m−2, provided current options for control remain and that poisonous or irritant species, and also highly competitive grass species, are kept at nonlimiting abundance. That grain production can persist at the designated A2 (seedbank) therefore relies on weed management retaining enough effective control measures. In the late 1980s and early 1990s, such measures were available to control seedbanks even up to 10,000 m−2 (Young et al. 2001), but the evolution of herbicide resistance in grass weeds and the withdrawal of pesticides may exacerbate the current problems with the shift to grasses. Moreover, the dicot seedbank is shown to be able to support general invertebrate functional groups down to a seedbank of around 2000 m−2. Such a seedbank allows emergence of mainly noncompetitive weed species, typically 20-30 species in a field, in sufficient number to support the food web, but only in years when break crops such as oilseed rape are grown (Squire, Rodger, and Wright 2000; Hawes et al. 2010). If such break crops were not present, then a seedbank of this size and composition would have little opportunity to support the invertebrate functional groups, and over time the seedbank itself would decline further. In principle, therefore, a coexistence can be maintained between economic output and broad-scale in-field biodiversity.
Has decline of in-field function been detected?
The question needs to be resolved as to whether intensification has begun a self-limitation of yield and other in-field functions. Soils in the region are largely stable and relatively high in organic matter (Dobbie, Bruneau, and Towers 2011), such that a collapse of the type shown in the US dust bowl (see Introduction) is unlikely. The more pertinent question is whether the system has degraded to a degree that would lead to a fall in, rather than maintenance of, grain output. The evidence is not clear cut. On the basis of yield trends, the capacity of fields to support grain output has not declined in the 25 years since phase III began (Figure 3). Moreover, nitrogen use efficiency, derived from Figures 3(b) and 4(b) as total grain yield per hectare divided by N input per hectare, has, despite large variation due to the factors referred to above, increased systematically by 0.46 kg kg−1 since 1980, at least in part because new varieties have been introduced with increased use efficiency (Bingham et al. 2012). Agronomic output as a whole therefore has not shown any decline, but neither has it shown much improvement.
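The nitrogen use efficiency measure used above can be sketched as follows. The year-by-year values below are hypothetical placeholders to show the form of the calculation, not data read from Figures 3(b) and 4(b).

```python
# Sketch of the nitrogen use efficiency (NUE) calculation described in the
# text: total grain yield per hectare divided by N input per hectare.
# The series values are hypothetical placeholders, not survey data.

def nue(grain_kg_ha: float, n_input_kg_ha: float) -> float:
    """kg grain produced per kg N applied."""
    return grain_kg_ha / n_input_kg_ha

years = [1980, 1995, 2010]
grain = [6000.0, 7200.0, 8000.0]  # kg ha^-1 (illustrative)
n_in  = [180.0, 190.0, 185.0]     # kg ha^-1 (illustrative)

for y, g, n in zip(years, grain, n_in):
    print(y, round(nue(g, n), 1))
```

A rising NUE series of this kind, computed over the actual regional records, is the basis of the trend quoted above.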
There are, however, early warning signs that in-field supporting and regulating functions of the ecosystem may fail. A first indication comes from an unintended response, or lack of response, to agronomic interventions. The major weed shift toward grasses (Figure 5) is an unintended, negative response, whose causes are still uncertain. It appears to have begun in phase II, but even near the end of phase II in the study region, dicotyledonous weeds were still more abundant overall, even when a single grass species was the most frequent and abundant (Warwick 1984). Yet by 2000-2010 grasses, mainly Poa annua and volunteer cereals in this region, were dominant and occupied half the seedbank (Figure 5). This shift occurred despite a 2.6-fold rise in the use of herbicide, from 1.01 herbicide formulations applied per unit area of grain crop in 1974 to 2.62 in 2014 (data in Monie, Reay, and Wardlaw 2014 and previous Pesticide Use Surveys). Over the same period, all pesticides showed a 6.7-fold rise averaged across grain crops, while wheat alone showed a 10.8-fold rise.
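The fold-rise arithmetic quoted above is simply the ratio of the two survey figures:

```python
# Verification of the fold-rise arithmetic quoted in the text
# (both figures are from the survey data cited above).
herbicide_1974 = 1.01  # formulations per unit area of grain crop, 1974
herbicide_2014 = 2.62  # formulations per unit area of grain crop, 2014

fold_rise = herbicide_2014 / herbicide_1974
print(f"{fold_rise:.1f}-fold rise in herbicide use")  # 2.6-fold
```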
A second type of indication consists of evidence of degradation in soil properties. Interventions of repeated, heavy tillage and agrochemical application reduce soil carbon and break fungal structures, leading to loss of soil, soil carbon, and plant nutrients in leaching and runoff, and consequently a diminution of soil as a medium for plant growth (Powlson et al. 2011). Well into phase III, a proportion of fields in the region were considered to be suboptimal for plant processes, as assessed by an ex situ root growth test (Valentine et al. 2012), and of these fields, those currently supporting crops with the highest inputs of pesticide and fertilizer appear to have lower soil carbon, higher bulk density and lower water holding capacity (Squire et al. 2015). If soil properties continue to degrade, then economic output should also fall.
The third type of indication is provided by the decline of those seedbank species providing supporting and cultural ecological functions. The general suppression of the whole seedbank during phase II, targeted mainly at grasses and abundant dicot species and mediated through an increase in the number and kill range of chemical herbicides (Marshall et al. 2003), inevitably reduced specialists along with the rest. The consequence depended on whether the functions of the specialist groups overlapped with those of the broadleaf generalists. Groups such as the poisons and irritants also have a role in supporting the food web, but that role appears to have been accommodated by the general broadleaf group. In contrast, nitrogen-fixing legumes, taxa that support pollinators and the iconic flora of cultural significance have no counterparts among the generalists and have been reduced toward, and sometimes below, A1 by an agronomic management that was not targeted specifically at them. They have been designated among the most threatened groups of plants in the UK (Preston, Pearman, and Dines 2002). Current approaches to management of wild arable plants are unsustainable, both in their enhancement of grass weeds and in their suppression of specific functions.

Research needs to define the intensification cycle
The three phases of intensification quantified here have been followed in the development of many agricultural ecosystems in the previous century, yet few other studies have had access to data going back over most of this period. Broad-scale comparisons of intensification in regions of the earth typically cover only a few recent decades, usually due to the absence of systematic records (e.g., Rudel et al. 2009). Those systems that are still to go through intensification could therefore learn from the experience of the present study. A primary requirement is to establish a baseline before intensification in terms of attributes of soil condition, in-field flora and food webs, field structure and losses to the wider environment, and then to monitor these attributes during the course of intensification. Another is to retain parcels of land that can be used to track any shift in baseline conditions and that can serve as a comparator for intensified land. Without such baselines, it will be difficult if not impossible to determine in many instances whether changes in biophysical attributes are solely caused by intensification or whether the baseline has shifted due to other influences.
For all systems, but especially those that have gone through intensification without a fully measured baseline, the only recourse is to determine the limits of ranges A and B (Figure 2) for important attributes. And while the emphasis here has been on the biophysical, sustainability depends also on societal and economic attributes (Huber et al. 2015); these varied attributes, and the interactions between them, should be defined by quantitative indicators, more so than they are at present (Yang et al. 2015). Currently, the number of indicators deployed globally in sustainability studies runs into the thousands, and as argued by Huber et al. (2015), formal methods will be needed to derive a minimum set for the purpose under study.
An appropriate set of sustainability indicators will be particularly important in systems such as that described here for providing advance warning of phase IV, decline and collapse; and again, the coverage will need to extend well beyond the biophysical. Evidence through the scheme in Figure 1 requires a link to be established between an intervention, a life form, an ecological process and a higher-level service or output. Yet even for most of the biophysical attributes considered here, the limits are not yet fully quantified, even if the links may be understood in principle. This is so despite the agricultural ecosystem described here being very well documented compared with many others. Major effort is therefore needed in defining ecological, and socioecological, safe limits.