  Reconnecting… Considering Digital Environments in Narratives of British Childhood Decline, 1977-2010CE

    Update Jan 2026: This article is 5 years old as I am publishing it, and since it was written I feel I have come a long way in my understanding of the topic. It is, however, an important precursor to my doctoral thesis, the North East Environments of Childhood Project, and as such I think it is worth preserving.

    We could never have loved the earth so well,

    if we had had no childhood in it.

    – George Eliot, ‘The Mill on the Floss’.

    INTRODUCTION

    Play Across Time

    Across cultures and continents our archaeological records teach us of children at play. In 2017 in southern Siberia archaeologists uncovered a rare collection of dolls and figurines dated to c. 2,500BCE, buried in graves alongside their childhood owners. In 1,000BCE, Ancient Egyptian children played with spinning tops, bouncy balls, and dice. In 200CE, Mesoamerican children played with wheeled toys, especially noteworthy as the wheel was only ever designed for the purposes of play in pre-Columbian Mesoamerican societies, never transport. Alongside and earlier than any of these, natural playthings such as sticks, stones, streams, and bones are tools of play with which almost any child in history will be familiar, and indeed other creatures such as dolphins and chimpanzees have also been observed to toy with such objects. In your own youth you will undoubtedly have played with objects like these, and taken part in other ancient pastimes such as hide and seek, tag, hopscotch, and leapfrog, games passed down the centuries by childhood oral traditions. In the 21st century, however, there is growing debate and concern surrounding the survivability of these millennia-old entertainments; the very concept of childhood itself has been said to be under threat. Calls arise for a ‘much needed child-saving movement’, but from where have they sprung? What threatens these time-honoured traditions?

    ‘kids used to play all day outside, ride bicycles, play sports, and build forts’, so affirm the opening lines of Digital Childhood: The Impact of Using Digital Technology on Children’s Health, a 2019 report written for the International Journal of Pharmaceutical Research & Allied Science (IJPRAS). The idea of what kids ‘used’ to do is operative in this context, a framing that places all young people under the roof of the same broad church. Another article for an educational advice group, Consequences of the New Digital Childhood, reads: ‘Think back on your childhood, and you probably remember hours spent in active, imaginative, and outdoor play. It’s likely that you rough-housed with friends or neighbours, engaged with nature, and built entire worlds from sheer imagination’. Sentiments such as these are frequently expressed in both contemporary academic and journalistic writing, often accompanied by an argument that “kids these days”, as Pamela Riney-Kehrberg’s The Nature of Childhood asserts, waste their energy ‘watching television, or looking at their computers, cell phones and video games’. ‘Whatever happened to ‘go outside and play’?’, decries a 2017 CNN article. In reading contemporary writing on children and their environments there is one inescapable and united conclusion: that today’s young spend too much time in the digital world and too little time outdoors engaging with nature and physical play. However, division quickly surfaces when it comes to questions about the origins of this situation, and how it is best addressed.

    Several academics have sought to put a name to this phenomenon, which applies predominantly to what can loosely be called “western childhoods”. Ian Rotherham called it ‘Cultural Severance’, the sectioning of an essential natural element of humanity, and James Wandersee and Elisabeth Schussler coined ‘Plant Blindness’; however, it has been Richard Louv’s ‘Nature Deficit Disorder’ (NDD) that has proved the commonly adopted term. Though not recognised as an official psychological ‘disorder’, NDD has gained traction with a wide variety of UK groups such as The National Trust and The Council For Learning Outside the Classroom, as well as being a foundational principle behind The Children and Nature Network. During the 2010s it was also picked up by the majority of the country’s major news distributors from across the political spectrum, including the BBC, The Guardian, The Daily Mail, The Telegraph, and The Sunday Times. NDD has been associated with increases in ‘obesity, diabetes, autism, coordination disorder, developmental abnormalities, speech, learning difficulties, sensory disorder, anxiety, depression, and sleep disorders’ as well as ‘risky sexual behaviours, drug use, poor academic performance, and aggression’. Nicholas Carr’s The Shallows takes an even more serious view, arguing that prolonged time spent in the digital world, particularly for youthful minds, is degrading the very quality of human thought itself. As Carr puts it: ‘What we’re experiencing is, in a metaphorical sense, a reversal of the early trajectory of civilization… we seem fated to sacrifice much of what makes our minds so interesting’. The issue unsurprisingly plays into common parental fears over safety, health, and freedom, and its resolution is regarded by many as not only a practical but a moral imperative; the solution of course being, as Louv defined it, for children to be ‘reunited with the rest of nature’.

    Figure 1. A satirical cartoon illustrating parental fears over NDD.

    In Britain, as digital technologies continue to pervade aspects of childhood from the classroom to the living room, so too have scepticism and criticism arisen of the ways in which these new virtual environments are influencing the culture and quality of youth. Across the press, academia, and popular non-fiction writing there is a growing literature, some of it more declarative than others, that warns of digital dangers, particularly for the ‘malleable’ minds of children. From historical, educational, biological, psychological, geographical, and anthropological perspectives such works are making parents and educators more aware than they have ever been of the relationship between environmental factors and the happiness and healthiness of their children. However, popular conceptions of where digital cultures of childhood originated, what constitutes them as virtual environments, and how they have evolved over time, are generally dualistic notions that portray childhood as a “now versus then” phenomenon. Far too often, in the search for something to blame as the cause of NDD, cultural severance, plant blindness, or whichever term is favoured for the separation of Britain’s children from its landscapes, a conclusion is reached that reflects only one aspect of the reality behind the development of late-20th and early-21st century childhoods in Britain.

    Technology itself is often assigned the blame for cajoling children away from nature, designed to be addictive and to constantly require attention; the finger is also pointed at unnecessarily protective parents who confine their children indoors, and at children themselves who choose the virtual over the physical. On the opposite end of the scale a form of technocratic utopianism, the ‘silicon valley way’, is a less common but still influential stance that hails a digital panacea for childhood’s ills; this approach conceptualises a problem like NDD as ‘merely a physical Earth problem, and not an ethical one’. Either way, both perspectives take the view that modern childhood needs a “fix”. As David Buckingham observed in 2015, the discussion has become ‘marked by a kind of schizophrenia that often accompanies the advent of new cultural forms. If we look back to the early days of the cinema, or indeed to the invention of the printing press, it is possible to identify a similar mixture of hopes and fears’. Indeed, looking much further back, as Carr notes, Socrates voiced fears that reliance on the written word as a substitute for personal memory would reduce the ability of the human mind. Anti-digital rhetoric often leans toward the nostalgic and romantic, with quotes such as Roald Dahl’s plea from Charlie and the Chocolate Factory still adorning many a café-bookshop wall:

    ‘So please, oh please, we beg, we pray,

    Go throw your TV set away,

    And in its place you can install

    A lovely bookshelf on the wall.’

    Discussions and analysis of the ways in which the technological advances of the modern era have impacted children in Britain are certainly valuable, but the present discourse in large part lacks a nuance founded on a complex understanding of the multiplicity of childhoods, children as ‘dynamically configured, diverse and entangled assemblages of natural, cultural and technological elements’. The digital world does not stand apart from factors of gender, race, class, region, and ability; when we speak of what children “used” to do, should we not be asking which children? Furthermore, virtual environments themselves must be understood in their multiplicity and analysed for their specific qualities, not only as the general “devices” and “screens” so often explored in concept alone, mere antitheses to popular conceptions of what constituted pre-digital childhoods.

    Methodology and Historiography

    The focus of this study is consequently upon the transitional period during which digital technologies became adopted and then prevalent facets of childhood in Britain, beginning its analysis in 1977CE, with the widespread availability of personal computers, and finishing in 2010CE, with the introduction of the Digital Economy Act. The primary route of analysis will be through documents produced throughout this period from academic and government sources that undertook to study and report upon childhood and the digital environment. From an environmental and child-focussed perspective these sources will be used to explore the changing face of childhoods during this period, as well as to study how those changes were implemented and interpreted by wider British society in a rapidly evolving environment. Part 1 will contextualise the discussion and explain the already contentious debates that surrounded the nature of children’s play and education before the introduction of technological factors. Part 2 will continue chronologically, assessing the environments surrounding British childhoods in the late 20th and early 21st centuries that pushed them towards digitisation, and how those technologies both facilitated and impeded that transition. Finally, this study will conclude that the stereotype of children choosing to spend their time indoors, inveigled by flashing screens, does not take into account wider structural and societal changes taking place in Britain at the time, and characterises a very diverse category of people as a monolith. Even more so than that of their elders, a child’s individual life is subject to a set of intricate, interconnected systems of economy, culture, and environment over which they have little influence. Particular attention will be paid therefore to conceptualising the digital development of childhoods as a continuum in which there are no straightforward states of “before” and “after”.

    This study provides an alternative narrative to many of the existing histories of childhood and the digital environment during this period, which predominantly fall into two categories: histories of childhood that do not consider the digital, and digital histories that do not consider children. Nerds 2.0.1, A History of the Internet and the Digital Future, On the Way to the Web, and Spam are all examples of prominent cyber histories in which childhood presence and influence is effectively considered naught, their furthest mention being in relation to the development of technologies that can help children with disabilities or illness. That is not to say that these are not good publications which competently examine many aspects of digital history, and all were certainly useful in the writing of this study, but it must also be said that they neglect to consider children as the prominent users of new technologies that they are.

    From within the existing historiography on the history of childhood there have been a number of works that have examined children’s lives during this period, such as Children and Their Urban Environment: Changing Worlds, A History of Children’s Play and Play Environments, and Children in the Anthropocene. These texts offer some excellent insights into many of the contemporary factors behind the movement of children indoors and some of the consequences of that movement. Particularly beneficial for this study has been the multitude of research projects on youth undertaken throughout the period that they point to and evaluate, offering valuable insight into the changing nature and aims of these studies over time. However, a general omission in these texts is the digital space. One dangerous pitfall that much of this writing falls into is the vagueness so often applied in arguments that evoke the “before times”, construed uncritically as periods of de facto childhood liberty, happiness, and quality. These declensionist narratives of degradation and corruption are particularly prevalent when authors in the existing historiography frame their assessments around experiences of their own childhoods and ‘the freedom we had as kids’. Louv’s Last Child in the Woods puts that those ‘baby boomers or older, enjoyed a kind of free, natural play that seems… like a quaint artifact’. Analysis directs itself towards the processes of change with the implicit understanding that this change is moving away from something that was previously good, and little attention is paid towards identifying children as a diverse group whose lives have differed greatly across this period based on factors such as class, gender, and region. Furthermore, this contrivance assumes a past intimacy with nature that runs counter to ‘a long history of environmental degradation and disconnectedness’ starting well before the childhood of anybody alive today.

    The question must be asked: to what extent are our historical conceptions of childhood moulded by the experiences of middle-class academics, whose criticisms of modern childhoods fall most pertinently on working class households who historically and contemporarily have had more obstacles between themselves and access to nature? Such a comment may appear wantonly critical but is only to ask us to deepen our conceptions of what digital childhoods can mean. Indeed, as Jennifer Ladino identifies, we cannot miss that the forms of eco-nostalgia presented in many texts are intentionally designed to be a ‘mechanism for social change, a model for ethical relationships, and a motivating force for social and environmental justice’: the intersection between digital environments and digital environmentalism. Whilst it is undoubtable that concerns should be raised over the changing nature of youth in Britain since the emergence of digital childhoods, and indeed this study raises many, it is also important to recognise and challenge arguments that present common notions of a timeless “golden” period of early life which only in the 21st century has come under threat. This trend, which Peter Kahn named ‘environmental generational amnesia’, is also common outside of academia.

    As William Neuman identified in 1991, research in this field must tread a fine line between technologically determinist narratives that disregard social and cultural change and construct such ‘mythical objects of anxiety as the computer addict, the screen-zombie… the Nintendo-generation, the violent video fan, etc.’, and culturally determinist narratives that romantically assume, at least implicitly, that “our” children are too intelligent to be duped by the messaging of ‘consumer-culture capitalist economies’. A multiplicity of childhoods approach is a good method for avoiding both of these traps, as it does not assume any individual outside force to be deterministic over a category of people’s lives. In future research, however, it will be useful to employ new interdisciplinary methods that engage with the computational social sciences, as well as explore emerging fields such as digital ethnography and other forms of digital social research. This will allow scholars to better interrogate relationships between children and the specificities of the programmed architectures of digital devices and environments.

    Two foundational texts upon which this study has been built, and seeks to build, are Matthew Thomson’s Lost Freedom and Sian Edwards’ Youth Movements, Citizenship and the English Countryside, both of which are important historical studies of the relationship between child and environment. Susan Danby’s Digital Childhoods: Technologies and Children’s Everyday Lives and Sonia Livingstone’s Young People and New Media have also been instrumental texts from outside of the historical field in their examinations of cyberspace on its own terms and their attempts to look beyond, as Danby says, ‘simplistic descriptions of digital technology somehow having inherent ‘effects’ or ‘impacts’’. Affrica Taylor’s article “Reconceptualising the ‘nature’ of childhood” was also very helpful in warning against portraying an ‘essentialised’ view of children and environment, which has been used in the past as a means to oppress or exclude certain children from a society by branding them as “natural” creatures, apart from the rest of humanity. Still, however, these texts offer little consideration of digital factors as a part of the ecosystems of children’s environments. Not all digital histories can or should take child-focussed or environmental approaches, and not all histories of childhood environments must consider the digital, but the extent to which the history of childhood cyberspace has been overlooked in favour of physical space, to this day, is surprising.

    The history of the relationships between children, their environments, and the way the adult world has sought to shape them in Britain receives increasing attention as contemporary cultures struggle to come to terms with new digital normalities, and ideas of a golden past are frequently invoked as models to which Britain should be returning. However, there are also strong arguments made to the contrary, as Lloyd DeMause provocatively wrote in 1974: ‘The history of childhood is a nightmare from which we have only recently begun to awaken. The further back in history one goes, the lower the level of child care, and the more likely children are to be killed, abandoned, beaten, terrorized, and sexually abused’. Is a loss of nature and freedom in childhood a necessary cost of increased protection and quality of life? Contemporary debates surrounding these issues are not inventions of the modern world and indeed are surprisingly reflective of far more pedigreed discussions; their development is the most recent stage in a long historical tradition of interrogating the relationship between environment and childhood. It is crucial, and interesting, therefore, that we appreciate the longer view of the pre-digital history of British childhoods as both comparison and context within which to assess the changes they would start to face in the late 20th century, changes which so fundamentally shifted the environmental landscape.

    Part 1

    KIDS THOSE DAYS

    The Longue Durée of Work and Play

    The British and wider European tradition of play and education is commonly traced to Classical Greece, which produced a number of foundational theories on the relationship between these two activities. Plato wrote extensively on the topic of childhood pursuits, both in offering parents practical advice on child rearing and in pushing for legislative changes surrounding children’s games. Plato taught that child’s play could be moulded and diverted into ‘productive channels’, where games and toys could be used to identify the skills in which children were most apt and then to prepare them for an adult occupation that utilised those skills. A child who played with blocks, for example, would be encouraged to become a builder and another who played with dolls to become a teacher. Aristotle also believed that what a child did in play was important to their growth, and that a person would become lazy and unproductive if they did not have an active childhood. Indeed, he argued that children should receive no formal education until they were at least 5 years old, as inhibiting play during early years would be detrimental to their development. Many Greek children’s games are still recognisable today, such as Heads or Tails, Blind Man’s Buff, and Kiss in the Ring. Quintilian, a famed Roman educator, continued this legacy, insisting that it was essential for parents and teachers to observe pupils’ play in order to recognize individuality in their temperaments and intellects.

    Study of medieval children has been characterised since 1962 by the French historian Philippe Ariès’ now infamous work Centuries of Childhood, which studied paintings and diaries of the period across a span of 400 years. Controversially, Ariès concluded that during the medieval era in Europe the category of “child” did not exist as it would later come to be understood, youngsters instead being seen as ‘small-scale adults’. He argued that parents lost too many children to attach any form of unique significance to them, and furthermore that they raised their children to believe that play led to idleness, truancy, and inattentiveness. However, several scholars have since criticised this approach, with the research of Nicholas Orme, Shulamith Shahar, and Sally Crawford rebutting the majority of Ariès’ arguments. Orme demonstrated evidence of adults providing tailored items of culture for their children such as toys, games, and books, also noting children’s tendency to rebel and spend time by themselves away from adult supervision, creating their own games and cultures. Shahar pointed out that the medieval and renaissance periods equally enjoyed a proliferation of play with small objects such as marbles, balls, and dice, and that references to equipment to help with jumping, swinging, and balancing, known as “merry totters”, were not uncommon. Crawford, mirroring Plato, explained how girls and boys ‘extended their skills’ through play in relation to tasks they would later be required to perform when they were older; girls learning household tasks and boys the trades of their elders.

    In the 16th century Martin Luther argued extensively for the reformation of education as well as religion, describing schools as ‘a hell and purgatory… in which with much flogging, trembling, anguish, and wretchedness, they learn nothing’. He proposed a universal state school system that would be compulsory for every child but would also sit within a wider education structure that included time for work at home, learning trades, and, of course, play. A century later the Czech theologian John Amos Comenius built upon Luther’s work and proposed the idea of the educational ladder for both boys and girls, a staged set of schools to cater for children as they grew, beginning with a ‘mother school’ (or nursery) for young children and progressing all the way through to university. He also advocated for using outdoor play as a technique for fostering the healthy development of mind and body, saying that ‘it is necessary to put the body in motion and allow the mind to rest’. Comenius’ teachings were exemplary of a strange consistency across the history of childhood theory and philosophy in the European tradition: a tension between intellectuals arguing for greater emphasis to be placed on outdoor activity and play, and educational systems that leant toward more sedentary pursuits. It was with the enlightenment however, particularly in the late 17th and early 18th centuries, that many of the specific educational dichotomies that are characteristic of contemporary discussion over digital childhoods came into being.

    Figure 2. Ancient Greek statuette of girls playing knucklebones.

    Enlightenment Thought

    Rationalism and romanticism were two intellectual movements of the enlightenment that had a significant impact upon the development of approaches to children and environment. The rationalist perspective, characterised by the works of thinkers such as René Descartes, Immanuel Kant, and John Locke, is still very much influential on the official structures and regulations of modern British childhood. This stance framed the natural world as a force of reason, in some ways the very foundation of order. On this basis, rationalists understood nature as fundamentally logical, and as such it could be understood and controlled with the application of appropriate reason. This same approach was adopted towards children who, as natural creatures, were seen to be intrinsically governable. John Locke’s An Essay Concerning Human Understanding (1689) is famous and exemplary in this regard as he describes the child as a ‘tabula rasa’, a “blank slate” which need only be impressed upon. Locke’s observations of children at play led him to the concept of ‘educative play’, using playtime as a space for semi-structured tuition. He described how, in his experience, the children of wealthy families were done great harm by being showered with toys and gifts which only taught them ‘pride, vanity, and covetousness’; they did not learn to value what they had. In Locke’s view a child was much like any other natural organism in that they had to be carefully managed otherwise they would become wild. With the correct instruction, however, they could be formed into an instrument of reason.

    Where rationalists such as Locke sought to control a nature that was logical, romanticists sought to learn from a nature they saw as inscrutable. In large part the romantic phenomenon arose in response to the increasingly rationalist-informed industrialised landscapes of Europe that authors such as William Blake, Mary Shelley, Alfred Tennyson, and Jean-Jacques Rousseau found themselves surrounded by. Predominantly an artistic movement, romanticists forwarded emotional countercultural arguments in which children and the natural world were often construed as intertwined joint symbols of innocence and purity. Children were a part of the natural world in a way that adults were not and could not be. As Rousseau argued in On Education (1763), childhood should be defined by teaching that is ‘beyond our control’ and follows ‘the goal of nature’. Indeed, Rousseau saw no distinction between work and play for children. Games were the work of the young, and in them ‘the child brings everything: the cheerful interest, the charm of freedom, the bent of his mind, the extent of his knowledge’. He saw the work of rationalist tutors such as Locke as pedantic and damaging, filling children’s heads with facts rather than skills.

    This romanticist connection is well demonstrated in the lines of William Wordsworth’s “My Heart Leaps Up”, published in Poems, in Two Volumes in 1807. Therein Wordsworth marvels at the beauty of a rainbow, considering how it takes him back to a state of child-like joy, before concluding with this reflection:

    ‘The Child is father of the Man;

    And I could wish my days to be

    Bound each to each by natural piety’

    Johann Pestalozzi and Friedrich Fröbel were heavily influenced by Rousseau’s writings and built much of his theoretical work into their practices. Like Rousseau, the core concept of their position was that the key to a good childhood education was in creating an environment, both physical and social, that fostered creativity and development. Pestalozzi also emphasized an aspect of Comenius’ teachings which said that true learning must take place through actions; that you ‘thought by thinking, not by appropriating the thoughts of others’. Fröbel, creator of the kindergarten, believed that educative programs should be moulded around the natural interests of the child and, once again, that constructive, enjoyable play was the best method by which to determine what an individual was intuitively inclined towards. He highlighted the keeping and cultivation of allotments and gardens as a particularly helpful pursuit, as these were spaces in which children could watch the fruits of their labour develop over time, and eventually harvest a reward that would be appropriate to the quality of their efforts. This idea that young children learn most effectively from being able to interact with and actively manipulate the materials of their environments was explored further in the 20th century with the works of Maria Montessori, John Dewey, and Jean Piaget.

    As with rationalism, romanticist ideas still greatly inform contemporary British childhoods, although in a less structural format. Aside from children of kindergarten age, where it can be reasonably argued that romanticism is a guiding structural principle, romantic notions of childhood exist outside of official practices and educative systems and more in the realm of popular conceptions and representations of childhood; in children’s books, television, and in the kinds of outdoor activities parents are encouraged to nurture their children with. However, whilst the two philosophies can appear at first blush to be opposed in a “nature versus nurture” form, they are ultimately two sides of the same coin. Both emerged during a period of European history where confidence in the traditional role of religion to act as a moral compass, spiritual guide, and force of reason was being eroded. The natural world, and the child by association, was able to act as a unifying, present force that both movements drew upon as motivation and justification for their actions and beliefs. As Sara Maitland writes in Gossip from the Forest: ‘wilderness finds its complement and counterpart either in conceptions of childhood moral innocence or the child as tabula rasa’. Indeed, the founding theoretical framework at the heart of the turn-of-the-twentieth-century child saving movement was a fusion of both philosophies.

    Play Manufactured

    ‘Life without play is a grinding, mechanical existence organized around doing the things necessary for survival’.

    In 1859 the very first purpose-built playground in the world was opened in Manchester, a prototype for a model that would see mass adoption in Britain in years to come. The rise of the motor car in the early 20th century accelerated this trend as city streets shifted in purpose away from acting as areas of public domain and towards avenues of mass transport. The provision of specialist playground environments and equipment for children was an urban creation, a key point of principle being to move children out of the streets (and other public areas) and into specialised environments of safety. During this same period Britain saw a boom in the commercial mass production of toys, and the birth of the toy shop as a common feature on British high streets. Where previously crafted toys had been the preserve of the upper and middle classes, over the century they steadily “democratised”, becoming available to working class households, but also becoming more standardised and controlled. Increasingly, the majority of toys on the democratised market were designed for yard and indoor play. This would lead to the phenomenon of toy “trends”, and an ever-expanding library of choice for children and parents. However, as Natalie Canning explains in Play and Practice in the Early Years Foundation Stage, and as this study will go on to further demonstrate, there are many developmental benefits that unpredictable, unstructured, informal environments of play can offer a child over domesticated ones.

    This shift towards more “guarded” forms of play for British youth was campaigned for by a number of organisations, namely the various Societies for the Prevention of Cruelty to Children (SPCCs) that were set up throughout the country’s cities during this period, the first being in Liverpool in 1883 before the national organisation coalesced in 1889. This was the birth of the child saving movement, primarily designed, as Anthony Platt describes, as a romanticist conservative movement that cherished children but also controlled them. It was during this period that aspects of youth that had previously been dealt with informally became categorised and increasingly thought of as distinct identifiable phenomena; phenomena that could be addressed, controlled, and even eliminated. To name something is to ascribe it identifiable characteristics, and members of the child saving movement were, in a sense, in the business of “inventing” new categories of youth behaviour, particularly misbehaviour, as exemplified by the creation of the juvenile court system during this time.

    Figure 3. Print of an early 20th century toy shop.

    This increased attention paid to children and childhood came with substantial benefits for many young people who previously had been overlooked by powers of authority. Those neglected, bullied, and beaten, those with dependencies, and those considered “delinquent” were now more likely to receive official support. In America, where the child saving movement was developing roughly in parallel with Britain, Baronet Charles Chute called the juvenile court system ‘one of the greatest advances in child welfare that has ever occurred’. At the same time however, the same system necessarily resulted in expanded restrictions surrounding where all children were allowed or supposed to go and what they should do as safety became a mainstream political concern. Exemplary of this new attitude sweeping Europe and North America was a precedent-setting 1915 legal case in Tacoma City, Washington where the parents of a boy who was injured falling from a swing successfully sued the school board for financial compensation. Following that lawsuit, playgrounds across Washington state and indeed America were taken down in fear of prosecution.

    After the first world war there was a brief flourishing of progressivist, even utopian, writing that argued for increased freedoms for children, particularly in Britain; the collective trauma of that conflict bolstered new approaches to childhood, alongside the changing court system and a falling birth rate. However, after the second world war utopian freedom-oriented thought gave way to a ‘child-centred’ pedagogy which placed more emphasis on protecting children, creating purpose-built environments for them separate from the adult world. In A Progressive Education? Laura Tisdall sums up the logic of child-centred education and parenting as seeing children as ‘fundamentally separate from adults, distinguished by their developmental immaturity’. Tisdall argues that this pedagogy took a half-hold over Britain and that it still exists today in tension with movements that want to do away with specially manufactured environments and supervision in favour of more informal methods of childhood management. When technology entered the scene it was not doing so to a previously stable understanding of childhood; the social upheavals of the 20th century had already created a volatile environment wherein the older generation was bringing up the younger in a significantly different world to the one in which they themselves had been raised.

    Part 2

    KIDS THESE DAYS

    ‘This is an era when ultrasound scans are routinely shared on social media by expectant parents… Thus, the cliché of millennial children being ‘born digital’ might perhaps be updated to ‘preborn digital’’.

    The Modern Child

    After 1977, as home computers, games consoles, and, in the 1990s, mobile phones became available, British children were among their earliest adopters. In Young People and New Media Sonia Livingstone forwards that these children were ‘a distinctive and significant cultural grouping’ which pioneered the use of new technologies, due to existing in a stage of life characterised by learning and experimentation. ‘Cyber playgrounds’ were an environment of play unlike any that had come before them. However, whilst “children” may have been prominent adopters of new technologies in 20th century Britain, that conclusion does not satisfy the need to consider a multiplicity of childhoods. Multiple studies have demonstrated that ‘boys, older children and middle class children all benefited from more and better quality access’ to digital devices, and then later the internet, than girls, working class, and younger children. Helsper and Livingstone describe this in terms of ‘digital opportunity’: whilst a child may have access to a computer, they may also have poor quality hardware or connection, or be ill-equipped with the knowledge and skills required to fully utilise the tool. The largest determining factor as to whether someone is an active web user has been found to be confidence, not age. In their 2015 study, Olin-Scheller and Roos found that rural Swedish children only peripherally engaged with digital activities at both school and home, problematising the view that young people were ever a ‘homogeneous group of digital natives’. Indeed, today’s children generally, even those of privileged backgrounds, have been found to spend the vast majority of their time online on only a few websites, their lack of “travel” mirroring those with low opportunity in the physical world.

    Existing research such as that of Kidron and Rudkin has thus shown that young people, being ‘firmly on the lowest rung of the digital opportunity ladder’, are advocates for more management and control of digital environments. The authors of a 2017 report, The Internet on Our Own Terms, also found that British children wanted more regulation of online content and more control they could exert themselves over what they encountered online. In some respects these reports fly in the face of popular conceptions of children as natural-born inhabitants of the digital wilderness. To be clear, however, these reports did not conclude children to favour restriction over freedom, but rather a greater balance between the two. Childhood is a process of development, a key element of which is moving from a state of high dependency during the first 5 years of life, through a state of semi-independence and self-care from 6 to 11, and towards increasing autonomy and growing reliance on peers over carers from 12 to 18. Therefore, children prefer an environment that can evolve with them, and be flexible in terms of the degrees of independence it allows.

    The 1970s and 1980s

    It is well known that play is among the most fundamental behaviours human beings engage in, and indeed is a signifier of intelligence in multiple species. Important to play itself is the environment it takes place in, as different environments allow for different kinds of play, and furthermore different people experience those spaces in different ways. Digital environments such as computers and games consoles, for example, can allow for a great deal of intellectual and social play, but evidently less physical sporting play. Since the late 1970s a loss of physical play for children has been an area of keen interest to educators, commentators, and academics who feared a loss of tactility and healthy activity in play, alongside a sense of a ‘centuries-old freedom’ being eroded. Indeed, these fears were not unfounded: a National Trust survey in 2016 found that today’s children spend half as much time outdoors as their parents did in the 1970s and 1980s. A lack of outdoor play has been linked to a number of physical and emotional illnesses, depression, low educational achievement, and social abnormalities, so establishing the role of digital childhoods within this trend is important.

    In the late 1970s and early 1980s home-computing became accessible to an increasingly expanding audience with the release of models such as the Apple II, ZX Spectrum, BBC Micro, and Commodore 64. During this period however, whilst research into environments of childhood was prevalent, little was made of the role that digital technologies might be playing. Instead, the dominant theme of the literature was the urban child, and the ways in which the modern world restricted childhood freedoms more structurally, in the sense of both physical and social structures. Studies increasingly focussed on the positive effects of “getting out into nature” for city kids, with a particular interest in those living with poverty or learning disabilities. A 1973 report for the Department of the Environment found that 75% of children, when asked to describe their favourite places, talked about spaces where they could play outdoors. However, Peter Townsend’s Poverty In the United Kingdom (1979) found that working class children were four times less likely than middle class children to have access to an outdoor space of sufficient size or quality to sit outside in. Rachel Kaplan’s Patterns of Environmental Preference (1977) focussed on longitudinal measures and found that her suburban-child participants reported beneficial outcomes up to several years after being sent on an extended nature-camp expedition. Similarly Behar and Stevens’ Wilderness Camping (1978) placed city children on a ‘residential treatment programme’ centred around outdoor activities, and found that the majority of their subjects demonstrated ‘improved interpersonal skills and school performance’ after the activity. Both of these reports chose to study children with learning disabilities and conditions such as ADD (today called ADHD).

    Figure 4. A Prize Gold-Plated BBC Micro for the accompanying magazine, 1985.

    Kevin Lynch’s Growing up in cities (1977), funded by UNESCO, was also concerned with rising urbanism around the world, particularly the ways cities were designed to cater increasingly for adults in cars rather than children on foot. This affected all children but was acutely felt by girls who, being seen as more vulnerable to such dangers, were more likely to be restricted from street activity. Howard Gadlin’s Child Discipline and the Pursuit of Self (1978) connected urban environments to a new ‘modern ideology… in which the goal of individual self-realisation overshadows community solidarity and stability’. Gadlin argued that the enclosed environments of the city both mirrored and encouraged enclosed internal cultures that were based on a desire to ‘control the personality of children’. Colin Ward’s The Child in the City (1978) was a popular text of the period that similarly challenged the adult-centredness of the city environment, arguing for ‘a city where children live in the same world as I do’. Ward highlighted “micro-places” such as footpaths, greens, and kerbsides that were important spaces of play and refuge for children. Matthews and Limb’s later study Defining an agenda for the geography of children (1999) also found that small informal spaces which could be manipulated by children were the most valued; these included trees, ponds, dens, lanes, and climbing and hiding spaces generally.

    The Child in the City, alongside texts like Urie Bronfenbrenner’s The Ecology of Human Development (1979), attempted to adopt an ‘ecologically valid psychology of development’, looking to study children in their “natural” environments as a direct response to the perception that children were losing them. During this period it is certainly true that city planners held simplistic notions of children characterized by a concept of a “universal child”, which of course excluded lower-income, non-white, and female childhoods that typically had less access to the cars and technologies that facilitated their concepts of late 20th century life. The overwhelming academic cause of the period was an attempt to understand how cities could be designed differently to better accommodate young people; however, this too skewed towards the types of childhoods that the academics writing the studies had had.

    One contrary voice of the period, however, was Alasdair Roberts, whose Out to Play: The Middle Years of Childhood (1980) argued that childhood games and play were just as lively as they had ever been in Britain. He popularised the idea of the “middle childhood”, from ages 8 to 13, a time he had observed during the 1970s as one ‘of secret societies and clubs with many rules… the age of collecting (sea-shells, football cards, stamps), of jokes and riddles and odd customs’. Roberts’ research was specifically focussed upon Aberdeen, however, so his conclusions stretch thin when drawn across the whole country, but his work is a strong piece of evidence to support regional differences in childhood trends. In provincial cities such as Aberdeen, it is logical that urbanism was less pronounced. Digital environments were thus not considered drivers of the “decline” of childhood that so many publications of this period lamented; instead the city and adjacent factors such as cars and insularism were singled out. Cars in particular were a force that enhanced social polarization, as those children who did not have access to a car or a parent available to drive them had less access to activities and nature. Busy roads and pollution also tended to centre around poorer areas, reducing independent mobility. As can be seen in Newson and Newson’s Seven Years Old in the Home Environment (1976), however, whilst the language of the time always discussed a loss of “childhood” play, the play under threat was in the majority boys’ play, as girls were already more restricted to indoor activities by the expectations of parents and society at large.

    During the 1980s the field of research began to widen to include children more generally, as opposed to just those living in poverty or with learning disabilities, and indeed adults were increasingly included in studies looking at the negative impacts of urban environments. Gary Evans’ Environmental Stress (1982) pulled together much of the disparate research of the 1970s into one volume that attempted to systematically explain the impacts of ‘noise, heat, air pollution, crowding, and architectural dysfunction’ on city dwellers generally. Similar studies were those such as Altman and Wohlwill’s Behaviour and the Natural Environment (1983) and Roger Ulrich’s Aesthetic and affective response to natural environment (1983). Research projects into the positive impacts of nature on children continued with broader bases of participants, like Kaplan and Talbot’s Psychological benefits of a wilderness experience (1983) that found time in “wilderness” to give children ‘self-confidence and an improved sense of self-identity’. The founding of the Children, Youth and Environments academic journal in 1984 was proof that research interest in this field had become substantial; however, one of the central concerns that had given rise to this interest was proving to be only that, a concern. Robin Moore’s Childhood’s domain (1986) reported that 96% of urban children (aged 9 to 12) told researchers that outdoor places were their favourite places; children were not abandoning the outdoors. Moore coined the term ‘terra ludens’, the idea of a child’s personal play spaces being a crucial developmental support mechanism that gave them an ‘intuitive sense of how the world is by playing with it’. This was a recognition that environments are the necessary nexus where concepts of place and society converge in a child’s life.

    Whilst research proved over and over again the benefits of outdoor natural activity for young people and adults (see: Mary Ann Kirkby’s Nature as refuge in children’s environment, Rachel Kaplan’s The Experience of Nature: A Psychological Perspective), and that urban children, particularly the working class, had diminishing access to it, there was no indication that children were “going off” the outdoors, even though this was a fear that prompted much of the research to begin with. What the concerned academics of the 1970s and 1980s revealed and detailed were the forces of the period that were pushing children towards the indoors and digital environments such as newly released games consoles like the NES and Atari 7800. The losses of childhood freedoms in the real world left a slack that digital freedoms could pick up.

    The 1990s

    The theme of “children and the city” continued in the 1990s literature, this time in extensive monographs that utilised all the research of the previous two decades. These included Boyden and Holden’s Children of the Cities, Lave and Wenger’s Communities of Practice, and Sheridan Bartlett et al.’s Cities for Children. This was now well understood and accepted theory, but new areas of academic interest were arising, most significantly around the role of digital devices in children’s lives, the fragmentation of traditional family and community structures, and the idea of a person’s “independent mobility”. Mayer Hillman et al.’s One False Move (1990) found that whereas in the 1970s nearly all British 9-year-olds were allowed to cross the street independently, now only half were.

    Nikolas Rose’s Governing the soul (1990) suggested that childhood was undergoing a ‘process of bureaucratisation’, by this meaning that children’s participation in public spaces and activities was being constrained as focus increasingly rested on ideas of forwarding individual identity and agency in children. Ulrich Beck in Risk Society (1992) put that technology, by facilitating increasingly diverse individualised ways to consume media, was accelerating a ‘western trend towards individualisation’. Communal spaces such as parks, streets, and plazas which catered to a generalist user base were falling out of favour compared to individualised places such as private gardens and living rooms. Family formations had also been changing throughout this period as children increasingly moved away from their hometowns once reaching adulthood, isolating their own children from traditional familial networks such as cousins and grandparents. This left a time vacuum in children’s lives that technology was able to fill, home and mobile phones picking up the slack of a lost physical connection and enabling children to keep in contact with distant family and friends.

    Thinking solidified around theories of the restorative qualities of natural environments to city dwellers, as both social science and medical studies continually affirmed the concept to be true. A 1997 study even found that a chronic lack of play and physical touch during childhood could result in developing a brain ‘20 percent to 30 percent smaller than normal’. Linked to this, academics, educators, and commentators turned their eye upon home technologies like televisions, mobile phones, games consoles, and personal computers as they continued to increase in prevalence, and upon the world wide web after it launched in Britain in 1991. Ray Lorenzo’s Too Little Time and Space for Childhood (1992) and Neil Postman’s The Disappearance of Childhood (1994) both identified the television in particular as part of a wider problem of “lost childhoods”, Postman writing that ‘children today are captive in their homes… They are institutionalized, over programmed, information stuffed, TV dependent, ‘zoned in’ and age segregated’. It was during this period of writing that attitudes towards technology started to shift; previously it had not been considered an influential factor over children’s lives of learning and play, but now it began to receive serious consideration.

    Not all interest in technology for children was pessimistic, however; indeed there was a great deal of optimism during the 1990s over the role that technology would play in children’s lives in the present and close future. The creation of the internet in particular brought with it a wave of utopian ideals, one such being that online, all users were equal. The idea of a connected, intelligent, globalised world resurrected some of the visions of the inter-war years. Richard Lanham’s The Electronic Word (1993) argued that digital technologies, with the particular aid of the internet, would enable a mass form of democratic literacy that would allow countries to ‘enfranchise the public imagination in genuinely new ways’. Likewise Jon Katz in Media Rants: Post Politics in the Digital Nation (1997) saw the digital as a means of children’s liberation from the increasingly restrictive adult physical world. The online space, seemingly an infinite realm of possibility, tempted grand claims of hope or despair on the behalf of commentators of the time. However, because the internet was designed as a universal tool, no special concessions were made to make it accessible for particular groups. As Kidron and Rudkin point out in their Digital Childhood report, in the early days there were ‘not any design concessions for child users’, and that legacy continues today despite the fact that children make up over a third of the internet’s 3 billion users. A study in the Contemporary Issues in Early Childhood journal in 2003 concluded that children found the internet a more interesting and useful tool when presented with a ‘child-friendly interface’. At the time the New York Times disagreed with Jon Katz’s assessment of technology as “liberating”, writing that ‘The computer teaches a child to expect to be entertained; the lump of clay teaches the child to entertain herself’.

    Figure 5. A child watches TV in a TV Shop, 1993.

    Peter Buchner’s Growing up in Three European Regions (1995) explained how the rapid development of technologies had invalidated many parents’ frames of reference for childhood and as such they were forced to become ‘involved in a process of negotiation with their children over mutual identities, rights and responsibilities’. The 1950s model of the nuclear family was giving way at the end of the century to what Sonia Livingstone called the ‘democratic family’, wherein traditional parental and child roles of the authority and the subordinate were replaced by a mutual expectation of love, respect, and intimacy. This was also linked to ‘explicit discourses of identity construction’ in a neoliberal individualist society where children were encouraged to develop identities based on personal preferences over a sense of belonging to a particular community.

    The long-term implications of technologically saturated childhoods were as yet unknown, giving rise to many hopes and fears but also contributing to individualist perspectives on childhood that favoured the use of home technologies. Marco Hüttenmoser’s Children and their living surroundings (1995) showed the downward spiral a neighbourhood enters into when children are restricted from outside play, deteriorating social cohesion and a ‘society capable of mutual help’, thus making it less likely for children to be allowed out. Rebekah Coley et al.’s findings in Where Does Community Grow? (1997) supported this thesis and further argued that natural spaces in an area which allowed child’s play were particularly beneficial to fostering communities. Representations of nature via technology were tackled by Edward Reed in Encountering the World (1996), which found that television and computer screens could contribute to learning processes, but were poor replacements for direct experiences that facilitated a ‘dynamic, dense, multisensory flow of diversely structured information’.

    Linking broader social issues to discussions of childhood environments became increasingly popular in the late 1990s, particularly criticism of neoliberal ideology, which had already come under significant attack from the field in the form of critiques of “individualism”. The dawn of the personal computer ‘coincided with the widespread deregulation of the financial services industries in the United States and UK’ and the computing industry was moulded by that environmental context in its early years. Wheway and Millward’s Facilitating play on housing estates (1997) criticised the practice of putting children’s spaces such as playgrounds and skateparks “out of the way” behind buildings and on unwanted bits of land, because it cut children off from the rest of society when ‘they want to be where it’s at, to see what is going on, to engage with the world beyond’. Children’s access to transport, social spaces, and shops was framed as key to maintaining them as integral participants of society. Access to natural spaces was again identified as beneficial in giving children an ‘increased sense of personal autonomy, improved self-concept, a greater capacity for taking action and being decisive’. Digital environments were thus eschewed as pale imitations of “the real thing” when it came to education and play, Lieberman and Hoody’s Closing the Achievement Gap advocating for outdoor classroom environments physically separated from digital devices as the best way to improve academic achievement. Likewise Sorrayut Ratanapojnard’s Community-Oriented Biodiversity Environmental Education (2001) also demonstrated that children learnt more in ‘hands-on’ outdoor classrooms than on a standard indoor curriculum.

    Sandra Calvert’s Children’s Journeys through the Information Age (1999) encapsulated many of the arguments that would follow in the first decade of the 21st century, construing children as innocent participants in a process of their own decline, pushed towards embracing a technological ‘media environment’ that was damaging to their healthy development. Kirsten Drotner’s Dangerous Media? (1999) similarly described mass media as a ‘moral threat’ to young people. Somewhat ironically, the period’s media obsession with the dangers of street play encouraged parents to instruct their children to stay indoors, and thus set off a new paranoia about the media itself. Whilst technology like mobile phones meant some parents gave their children more time and space to play, this did not necessarily translate into more “freedom” because, as Freeman and Tranter contend, whilst children may have been physically alone they were still within the parental ‘gaze’, always on call. This was a trend that would only continue in the following years as the advent of smartphones and GPS tracking allowed parents even greater remote control over their children, what Lenore Skenazy in Free Range Kids (2009) called ‘anxiety on speed dial’.

    In the absence of time and space for “free play”, British children’s activities became increasingly structured, scheduled, and organised during the 1990s, centred around pre-booked sessions of sports, hobbies, and lessons. This came hand-in-hand with an increased commercialisation of play whereby opportunities for free play, in both senses of the word, were reduced and considered lower status than those with an associated cost. This disadvantaged poorer families who could not afford to take their children to as many activities, and who were as such pushed more towards the use of digital play. Furthermore, the cars being used to take children to these activities contributed to the problem of unsafe outdoor space for free play in working class neighbourhoods. Timetabled physical activities, whilst physically healthy, have been shown not to provide the same mental benefits for children as self-directed play, including a comparative lack of stimulation of cognitive development, social and language development, independent learning, and the ability to cope with stress and trauma. The loss of children’s independent mobility to adult-dependent mobility was beginning to be linked to rising problems of ‘obesity, diabetes and other diseases associated with more sedentary lifestyles’. The literature’s narratives over the role of digital devices in this process, however, were ambivalent in a technological environment that was evolving faster than academic studies and research.

    The 2000s

    ‘Children will always be children and will always find a way to play’.

    The first decade of the 21st century saw the real emergence of what became a defining argument surrounding the role of technology in children’s lives, a declensionist narrative of childhood degradation based around the rise of a ‘media-saturated environment’ and the fall of “natural” outdoor activity. As is evident from research across the previous three decades, however, Britain’s ‘screen entertainment culture’ was more of a symptom than a cause of the loss of childhood freedoms, and children, particularly those of disadvantaged backgrounds, were pushed towards technology by outside factors. Exemplary of this, Wigley and Clark’s survey Kids.net (2000) found that working class children were significantly more likely to have a television, games console, or video recorder in their rooms than middle class children. Digital spaces offered play and freedoms to children that they could no longer enjoy as easily outside, but as the Digital Education Act of 2003 proved, the digital world too was becoming a more highly regulated and less “wild” place.

    Figure 6. A child recharges their mobile phone credit on a street machine, 2009.

    Academic research increasingly reflected the domesticity of British children’s lives, Nancy Wells’ At Home with Nature (2000) looking at children who moved house to an area with greener views out the window and finding marked improvements in peacefulness and ability to concentrate. Similarly, Wells and Evans’ Nearby Nature (2003) found that children with more natural space near home were less prone to anxiety, depression, and behavioural problems; the same children also rated themselves higher than their peers on measures of self-worth. Andrea Faber Taylor et al.’s Views of nature and self-discipline (2002) studied girls randomly assigned to architecturally identical apartments in the same building and discovered that the greener a girl’s view from her window, the higher she scored on concentration tests. The “child in the city” literature such as Louise Chawla’s Growing Up in an Urbanizing World (2002) and David Driskell’s Creating Better Cities with Children and Youth (2002) took on a more reformist bent, lamenting the decline of ‘street culture’ and even the shift from family television time to ‘bedroom culture’. Such works moved from simply the study of the issues of urban childhoods to advocacy for ways in which change could be enacted. Freeman and Aitken-Rose’s Future Shapers (2005) cross-examined urban planners to find that children were only considered in the planning of ‘recreation spaces’, but ignored in the planning of streets, houses, shops, leisure facilities, and infrastructure.

    Technology became a key facilitator of childhood sociability in 21st century Britain. A study from the Journal of Emotional and Behavioural Difficulties (2002) found that British children who did not own a mobile phone, the majority of them from working class backgrounds, were especially vulnerable to social isolation, as the phone had become a key device around which friendships and communities were built. The digital divide also exacerbated pre-existing social divergences, as highly social children more readily adopted digital devices as a means of deepening and expanding relations whereas less social children showed the opposite pattern. These children were thus not just socially excluded, but also denied an opportunity to develop digital skills. Those children who did have mobiles were also more likely to be able to cross ‘hitherto distinct social boundaries’. The problem was not always access to devices like mobile phones, however, but also the skills necessary to operate them, as Dominique Pasquier highlighted in Media at home (2002); both girls and working class households demonstrated a ‘problematic skills gap’ in the use of digital devices, as providing an opportunity for access to technology was easier than providing knowledge for use.

    Digital devices that allowed young people to “travel”, either physically or online, had mixed impacts upon their freedoms. On the one hand, as Williams and Williams’ article Space Invaders (2005) suggested, the expectation of parents to be able to communicate with their children at all times, and the children’s constant awareness of being under surveillance, created an environment where children felt they had no private space, which could be damaging to mental health. On the other hand, Marilyn Campbell’s The Impact of the Mobile Phone on Young People’s Social Life (2005) demonstrated the negotiating power that a mobile phone could grant children when discussing curfews and boundaries for roaming with their parents, allowing more freedom than peers who did not have phones. On a more structural level, familiarity with technology was rightly assumed to be a key skill for children in a future that would see a job market progressively more reliant on digital literacy; so whilst many areas of adult society and much of the academic literature encouraged children to get outdoors, Britain at large was forging an environment where the skills associated with outdoor activity were less valued than those of the digital and indoors. Todd Oppenheimer’s The Flickering Mind (2003) reflected this duality of the moment, whereby children were on a boundary, as he saw it, between sensibly harnessing technology to help them become ‘creative problem solvers’ or falling victim to ‘computerisation and commercialization careening out of control’.

    Contrarily, whilst time spent in digital environments was often framed as detracting from time spent in physical outdoor play, Holloway and Valentine’s study Cyberkids (2003) found the opposite. Echoing earlier studies such as the Department of Environment’s 1973 report and Robin Moore’s Childhood’s Domain (1986), they found that children overwhelmingly preferred to be outside if the weather and light allowed. Time spent in front of a TV, phone, or monitor tended to replace ‘doing nothing’ time when they weren’t allowed outside or their peers weren’t. Later studies have also shown technology use to promote social interaction and allow children of differing abilities to become a part of everyday social practice. Furthermore, their research highlighted how computer use was highly controlled and negotiated in homes, and that parents were not at all powerless to prevent children being “drawn in” to screen-use if they wanted to stop them. The crux of the problem lay not with children, parents, or the technology, but with the reality, or the perception of the reality, of a dangerous outside domain.

    Building on a now well-established literature of the benefits of nature, an increasing body of work was produced in the 2000s focussing simply on the benefits of play itself, under a fear that regimented children’s lives were leading to depression and stress. Garry Landreth’s Play Therapy (2002), Joe Frost’s The Developmental Benefits of Playgrounds (2004), and Louise Chawla’s Growing Up in an Urbanizing World (2002) all advocated for the therapeutic powers of play that yielded ‘positions of cognitive clarity, power, and primacy to the player’. Chawla and Malone’s Neighbourhood Quality from Children’s Eyes (2003) gave insight into the ways play in natural environments created confidence in children as they were more able to physically affect them; conversely, urban environments over which children could exert little control often instilled feelings of powerlessness. As Karen Malone pithily identified in her study of children in suburban Sydney: ‘places shape children and children shape places’. The safety of play in the digital world was also coming under question, Rachel Pain’s Paranoid Parenting? (2006) describing online bullying and harassment as ‘far from a parental bogeyman’. David Cohen’s The Development of Play (2006) saw the play crisis as one that also included the adult world, asking: ‘If the purpose of play is to prepare the child in various ways for adult life, what is the motive for adult play?’. As Cohen and many other academics and commentators saw it, the child was a useful tool through which to understand and criticise wider society, as children were seen as some of that society’s most vulnerable members.

    A particularly prevalent criticism of the period was popularly put by the influential British sociologist Frank Furedi in Therapy Culture (2004). Furedi was primarily concerned with the number of safety regulations being erected around children as part of what he called the ‘new security state’. This proliferation of regulation, he argued, was eliminating any possibility for learning, excitement, play, or risk, and shifting perspective away from the real dangers of modern childhood, the restrictions themselves. Marianna Papastephanou in Education, Risk and Ethics made the parallel point that western structures of education based around a ‘discourse of control’ were unhelpful in that they did not reflect the reality of risky uncertain human lives. Her observational studies led her to the conclusion that both children and adults had a ‘longing for the risks that make life meaningful’ but were being consistently denied them. The ‘billion-dollar industries’ in technology surrounding children were making increasingly enviable profits from the sale of both security and safety devices as well as indoor education and play devices.

    Whether these restrictions, often bolstered by the use of technology, were even making people safer was also on the agenda, or whether, as Tim Gill’s No Fear (2007) put it, the risks of play in modern city streets had been ‘blown out of all proportion’. Torin Monahan’s Questioning Surveillance and Security (2006) pointed out that whilst surveillance technologies like CCTV cameras had been shown to be effective in tracking down criminals after a crime, ‘they do not actually prevent or reduce crime in any significant way’. Cindi Katz in Power, Space and Terror (2006) wrote that in relative terms the dangers of cars or strangers were nowhere near as pressing as those of poverty and inequality between children, and that street crime had been falling since the 1970s and 1980s, meaning parents had generally played in more dangerous streets as children than those they now denied their own children.

    Restricting children to the indoors and digital environments created its own dangers, a 2006 report for the NSPCC found, citing rising levels of obesity, diabetes, depression, and other health problems that were proportionally more dangerous to children than the threats that most concerned policy makers and parents. Technology’s role in this was multifaceted: through television and online media it propagated not wholly unjust fears over children’s outdoor safety, through CCTV, mobile phones, and GPS trackers it facilitated restrictions over childhood freedoms, and through computer games and television it offered children an escape from those restrictions. A longitudinal study in 2010 found that long periods of time spent playing video games gave children difficulties with their attention spans; however, a separate study from 2012 found exactly the opposite, so the evidence on health impacts here too was ambivalent.

    Idolising Children (2007) by Daniel Donahoo ascribed issues of lost childhood freedoms to the slightly contrary problem of parents wanting too much for their children. He argued that the promise of technology helped to foster ideas of impossibly “ideal” childhoods that neither the children nor the parents could realistically achieve. As such, in the attempt to create perfect personalised childhoods, parents, educators, and policy makers were inadvertently making them worse. This trend extended far beyond Britain or “the west”: Pergams and Zaradic’s Is love of nature becoming love of electronic media? (2008) found a fundamental shift away from ‘nature-based recreation’ globally over a period of 50 years by looking at visits to national parks. Along similar lines, Dorothy Singer’s Children’s pastimes and play in sixteen Nations (2009) found a universal erosion of childhood across continents due to a lack of ‘experiential learning opportunities’. Furthermore, parents’ ideas around what outdoor “play” was were also beginning to change, Kelly Fisher’s article Conceptual Split? (2008) finding that unstructured play was increasingly seen as a waste of time by parents, who were also more likely to regard structured activities and sports as play. This was partly due to neoliberal competitive conceptions of children centred around personal narratives of success, under which “messing around” was not productive.

    Physical city environments were also still changing in the 21st century in ways that disadvantaged children. Whilst roads were sometimes being built more considerately, new housing developments in particular were constructed to child-unfriendly specifications. Housing stock since the 1980s generally grew larger on the internal footprint but smaller on the external, so people’s indoor living space grew at the expense of yards and gardens. Furthermore, modern housing façades increasingly featured smaller front windows, prominent garages, and smaller front porch areas. This meant that children had less physical outdoor space to play in modern developments and also, because the houses were built more insularly, parents and neighbours had less ability to keep a passive watch on their children in the lane. The demographics of families also changed during this period, as parents tended towards having fewer children and guarding the ones they had more carefully, creating the phenomenon of “helicopter parenting”. One of the promises of technologically interwoven childhoods for the adult world was the possibility of persistently monitoring, and thereby controlling and safekeeping, children in what Tonya Rooney called a ‘just in case’ model in Trusting Children (2010). However, as she warns: ‘Rather than simply “playing it safe”, parents and carers may be depriving children of the opportunity to be trusted and to learn about trusting others, and the opportunity for growing competence and capacity that can result from this’. Indeed, little evidence has been presented to show that people in the 21st century are more untrustworthy than in prior decades, but there is increasing evidence of a culture of suspicion.

    By 2010 academic research was beginning to assess digital environments of childhood holistically, incorporating them into understandings of environment more generally, and as such recognising the complications that arise from thinking of digital devices as simply either “good” or “bad” for children; some were even advocating for a blended use of technology in order to help children get outdoors, advice that would have raised many academic eyebrows only a few years prior. In terms of education, whilst the internet and other digital tools, by providing the world at a child’s fingertips, discouraged the use of memory for memorising specific facts, they encouraged an ability to scan information rapidly and efficiently. As Jim Taylor wrote in How Technology Is Changing the Way Children Think, not having to retain this sort of information in the brain allows it to ‘engage in more “higher-order” processing such as contemplation, critical thinking, and problem-solving’. Digital technologies certainly comprised an element of the story of the loss of freedom in British childhoods, but that element was more liminal than much of the literature, examined individually, suggests. Taken together over time, however, an image emerges of digital childhoods not as piteous sources of degradation but as an element enmeshed in a much wider and more diverse narrative of shifting social dynamics of power, control, and freedom in the late 20th and early 21st centuries.

    CONCLUSION

    An apparent cultural paradox lies at the heart of British digital childhoods today, and indeed our digital lives more broadly: a culture of individualism that operates inseparably within a culture of connectivity. Technology is ever-more tailored to the individual in terms of personalised “smart delivery” of recommendations, advertisements, news, and voice-activated digital assistants; at the same time it is also ever-more universal, with the majority of this individualised content being served by a minority of companies, devices, and services. The mobile phone, television, and computer have acted as facilitators for a new spontaneity and flexibility in young people’s lives, being able to arrange and rearrange social events with the rapidity of a ‘more fluid culture of informal social interaction’. However, this option has historically been more open to some, more privileged, children than others, and at the same time technology has been a facilitator of increasing restriction upon children’s lives. Yet the spectres of the “screen zombie” and occasionally the “nature nymph” still hang over the debate, encouraging a polarisation of opinion between the belief that children have been “taken in” by technology and the belief that they never could be. Either way, the idea that ‘children have won the battle, they are exactly where they want to be’ does not capture the complexity of the situation.

    The impacts of digital technologies are not inevitabilities that force changes beyond human control or understanding; they are socially shaped elements of both childhood and adulthood. Adopting this perspective allows the reader to see these devices within a space of continual negotiation amid shifting economic, social, and political circumstances. As children’s lives become ever more digital it is important that historians begin to grapple with the digital environment and conceive of it as a “space” where children have existed, a space that moulds children, is moulded by them and by other factors, and is experienced differently by different people. Future research could pursue multiple avenues, but particularly helpful would be interdisciplinary research alongside computer scientists who could investigate specific design qualities of technologies and how they have influenced behaviours over time. Oral histories of digital childhoods would also constitute an important archival resource for the future, when academics will want to consider the crucial transitionary generation between the pre- and post-digital worlds.

    Author/Publisher: Louis Lorenzo

    Date of Publication: 22nd of January 2026

  • Children of the Anthropocene: The Implications of the “Human Age” for the History of Childhood

    Positioning the History of Childhood Within the Anthropocene Debate

    Across the historical discipline the concept of the Anthropocene is being used to redefine how we interpret and describe the relationships between humanity and the rest of the natural world. The notion of a “human history” and a “natural history” existing as two separate streams of academic interest has met the Anthropocene confluence, thrusting the two together into one inseparable river. Our acceptance that ‘Humanity has initiated an environmental “phase shift”’, as Jason Kelly puts it, has opened new fields of historical enquiry into the manner in which people influence the environments of which they are a part, and the manner in which those environments influence them in turn. The ever-increasing contemporary relevance of climate change to the everyday of people’s lives has furthered this interest, and has led many historians to note the ways in which the Anthropocene does not constitute a universalising force: that, as Jason Moore writes, ‘because of existing power relationships, the ‘new reality’ will be more ‘real’ for some than for others’. Gender historians have taken a leading role in considering how the Anthropocene relates to the disparities between men and women as both contributors to, and subjects of, the consequences resulting from our new “human age”. Since Val Plumwood’s Feminism and the Mastery of Nature (1993) it has been recognised that both women and the natural world have been depoliticised, diminished, and distanced from the “standard” social order by the same dualistic enlightenment narratives. Both have been othered, defined as inferiors by an elite that has pretended ‘an illusory sense of autonomy’ rather than acknowledge its reliance upon them. The introduction of the Anthropocene has evolved these arguments, shifting historians toward more holistic understandings of an earth-system of wider interconnected dependency. As Jessica Weir notes, the Anthropocene has proven to be a useful conceptual tool in criticising prior narratives of ‘hyper-separation [that place] humans in a relation of mastery with respect to earth others and limit their capacity to respond to ecological devastation’.

    Many other historians, including those of empire, race, ethics, and materialism (to name but a few), have also adopted the language of the Anthropocene. Importantly, however, none have done so wholesale; instead, each new voice has provided critique and novel perspective on the conceptual framework the Anthropocene provides. Dipesh Chakrabarty and Kathleen Morrison have argued that from a postcolonial perspective the term’s use is too often western-centric and shaped by a set of ‘cultural blinders [that] impede our understanding of the complex and diverse history of the earth system’. This is likely because ‘much of the discourse on the Anthropocene has been dominated by Western scientific perspectives’. Katherine Gibson and Julie Graham have put forward ‘capitalocene’ for consideration as an alternative that highlights the economic system’s role in contemporary climate chaos. Donna Haraway has contended ‘chthulucene’ and ‘plantationocene’ to be non-anthropocentric descriptors that better include life forms other than humans within their remit and capture ‘the ways that plantation logics organize modern economies, environments, bodies, and social relations’. Ultimately, however, whilst the criticisms behind these proposed alternative terms are valid, they have remained subcategories rather than risen to prominence, both because of the Anthropocene’s preestablished prevalence and because many academics see a utility in positioning ‘mankind’s growing influence on the planet’ at the centre of the debate during the present age of threat to the planet’s existing climatic structures. ‘Saying that we live in the Anthropocene is a way of saying that we cannot avoid responsibility for the world we are making’, as Jedediah Purdy frames it.

    The possibility of exploring the history of childhood through the lens of the Anthropocene is the area of study this essay will seek to define. Compared to histories of gender or race this relationship is an underexplored one within the historiography to date, although outside the discipline there has been greater interest in this line of enquiry. Education studies, perhaps unsurprisingly, has proven the pioneer in exploring the relationship between children, childhood, and the Anthropocene, and has sought to ‘understand children and their lives as social actors enmeshed in complex social and material networks’ and to challenge the ‘presumed naturalness of childhood’, as David Blundell argues in Children’s Lives Across the Anthropocene. However, where articles such as these are rightly concerned with what Lili-Ann Wolff calls ‘the mission of early childhood education… in the epoch of the Anthropocene’, the angle still lacking is that of the historian, one that explores how the Anthropocene as a concept can help us understand the manners in which environments have influenced childhoods in the past and how children have influenced them in turn.

    In great majority the historical analysis that has addressed this relationship thus far has come from the works of historians of childhood and has focussed on how children’s spaces have been constructed, construed, and controlled by adult society. In the British context Mathew Thomson’s key work Lost Freedom: The Landscape of the Child and the British Post-War Settlement is a typical example of such work, arguing for a post-war transition in the relationship between children and the environment tending toward a ‘loss of freedom’ and a ‘turn toward increasing protection and restriction’ due to parental fears of strangers and cars. Similarly, Sian Edwards’ Youth Movements, Citizenship and the English Countryside interrogates how the concept of the rural was adopted by organisations such as the scouts as an ‘antidote’ to a problematised urban sphere. In the study of more recent history there has been an interest from writers such as Richard Louv in the denaturalisation of childhood in the “digital age”, that which mainstream media has branded ‘nature deficit disorder’ or what Ian Rotherham calls ‘cultural severance’.

    There have been fewer works on this topic from an environmental historian’s perspective, and those there have been begin with the history of the American frontier. Elliott West’s Growing Up With the Country (1989) is one foundational text of this type, wherein West finds that the fundamental difference between children and their elders on the frontier was in how they related to the landscape. For adults the frontier was something new, but for children it was familiar, giving them a ‘kinship’ with it that was unique. The only monograph-length attempt at such a work since has been Pamela Riney-Kehrberg’s The Nature of Childhood: An Environmental History of Growing Up in America Since 1865 which, similarly to Lost Freedom, charts what the author frames as a declension in the quantity and quality of interaction between children and environment over time. However, as with the publications of education studies, the central perspective at the heart of all these works is that of the adult. The trend has been to ask how factors such as parenting, social policy, and architecture can influence and shape children and childhoods via changing environment, and whilst these are important questions to ask, there is a surprising absence of “child-centred” and “environment-centred” narratives. This is one thing the introduction of the Anthropocene concept to the history of childhood affords: as with histories of gender and others before it, an opportunity for historians to highlight the unique experiences and relationships of children with their environments that stem from their independent agency. Whilst we might argue with some justification that an alternative term such as “adultocene” or “ageocene” would better suit our needs, it will be more useful (and indeed, simpler) to explain how stories of children and childhood can be brought forward and made distinct from the “human experience” when using the Anthropocene as a framework of understanding.

    Approaching the history of childhood with this perspective gives the historian new questions to ask of their source material and new ways of answering them. How do children, through exploration, work, and play, shape their own environments? How do their wants and needs, real or perceived, influence the attitudes and actions of adults toward children’s environments? In what ways does the Anthropocene as a force and a concept uniquely affect the lives of children, and in what ways do children affect the Anthropocene? Before investigating these further, however, it must be acknowledged that it would be hypocritical, whilst proselytising the importance of acknowledging divergent experiences of environment, not to point out that “childhood” itself is not a universal experience. Within childhood there are a multitude of identities that will to greater or lesser extents modify one’s relationship with the Anthropocene. As one example, from Fikile Nxumalo’s Situating Indigenous and Black Childhoods in the Anthropocene:

    ‘school gardens for young Black children in urban schools are often positioned from deficit perspectives, as a way to bring nature to certain children who lack it. Here nature becomes entangled with anti-blackness as it is positioned as a site of potential transformation for Black children deemed at risk or lacking “normal” connections with nature’

    A similar argument could be made with regard to working class children who are perceived to be lesser for their lacking something that those of middle-class parentage typically have. As Affrica Taylor summarises: ‘The assertion that children need nature has become commonplace, but should we ask which children?’. This essay, iterating and evolving upon the existing historiography and source material pertaining to British childhoods (predominantly southern urban ones), cannot claim to speak with authority on the childhood or environmental experience. However, it is an example of how the Anthropocene can assist us in examining a history of childhood, and a pointer toward new avenues for inquiry and potentials in exploring this conceptual marriage further.

    “Presencing” Children in the Anthropocene

    Despite the present dearth of historical material that seeks to presence children in the Anthropocene, the basis for study of such a relationship is strong, as western societies have long presupposed a connection between children and the natural world deeper than that of their elders. As historians of race and gender have already demonstrated, the dichotomy built between the human and natural worlds in the past has been inextricably bound to dichotomies drawn between certain groups of people and what society considers the “normal”. Anthropocene narratives that require a radical shift in the way humanity as a species constructs its own image of self are useful challengers to these old oppositions. However, for this essay it is important to understand the enlightenment philosophies of the 18th and 19th centuries which defined (and still define) the strong perceived links between children and nature in European society today before we can appreciate how the Anthropocene concept contests them.

    In broad perspective the enlightenment birthed two competing schools of thought regarding children and environment: the rationalist and the romanticist. The romantic movement began with works such as those of Jean-Jacques Rousseau and then William Blake, Mary Shelley, and Alfred Tennyson, in reaction to the increasingly rationalism-informed industrial European world those authors inhabited. The predominantly emotional countercultural arguments they presented knotted children and the natural world together as joint symbols of hope that were not only innocent and pure in and of themselves but lent an innocence and purity to one another. Their relationship was symbiotic; children were a part of the natural world in a way that adults were not, and could not, be. As Rousseau argues in On Education (1762), childhood should be defined by teaching that is ‘beyond our control’ and follows ‘the goal of nature’. William Blake’s “Nurse’s Song” in Songs of Innocence and of Experience (1794) similarly conjures images of an ideal youth as natural and pure:

    ‘When the voices of children are heard on the green

    And laughing is heard on the hill,

    My heart is at rest within my breast

    And every thing else is still’

    On the other, more instrumentally influential, side of this debate were the rationalist thinkers such as René Descartes, Immanuel Kant, and John Locke. Their lines of philosophy, just as present in the 21st century as those of the romanticists, saw nature as a force of reason, even the basis of order itself. However, where romanticists sought to learn from a nature they saw as inscrutable, the rationalists desired to understand and control a nature they considered fundamentally conquerable. John Locke’s An Essay Concerning Human Understanding (1689) exemplifies this approach with his famous description of children as ‘tabula rasa’, a blank slate. In Locke’s view the child, like nature, would become wild if not carefully managed, but with the correct instruction could be formed into an instrument of reason. Whilst it is tempting to reduce this tension between romanticism and rationalism to “nature versus nurture”, as William Cronon and Thomas Dunlap warn in Faith in Nature, both ideologies incorporated aspects of the other and looked to the natural world as a unifying, present force that could fill a role in society that had traditionally been fulfilled by religion. As Sara Maitland writes in Gossip From the Forest: ‘wilderness finds its complement and counterpart either in conceptions of childhood moral innocence or the child as tabula rasa’. Most importantly for this essay, however, both drew a line between the natural and human worlds, although for different purposes, and both identified children as beings who could permeate that boundary to some extent.

    In the contemporary context these two philosophies still carry weight in how humanity responds to the environmental consequences of the Anthropocene in regard to children. Romanticism has come to play an important role in many environmentalist movements, best exemplified by the rise of Greta Thunberg and the global youth climate strike movement which, whilst symbolic of the agency of the child, is also caught up in ‘environmental stewardship discourses that position certain children as future saviours of nature’. The same can be said for the rationalist perspective prevailing in technocratic circles which see environmental issues as ‘merely a physical Earth problem, and not an ethical one’ and view the young as saviours of the status quo via theorised future innovations and ‘the promise of one more generation’. These forms of romanticism and rationalism are dangerous as they detach the adult world from any responsibility or agency in addressing climate concerns.

    The introduction of the Anthropocene concept uproots this rationalism/romanticism dynamic. Far more than being a simple obstacle for humanity to solve, the reality of the Anthropocene has implications that have ‘the potential to challenge conventional ways of seeing those constructions of nature found at the heart of Enlightenment modernity and confront its contradictory positions’. Its very existence is proof of an implicit and deep connection between humanity and the natural world that sets aside the notion that they are antitheses of one another. Indeed, it highlights how much they are the same. This realisation deromanticizes the natural and childhood worlds, unlocking them from the fairy tale, almost orientalist perspective with which they have been perceived. Free from this timelessness they can be considered more as active agents of change that have the ability to play important roles within the global network of factors that has brought about the birth of the Anthropocene. In the new “human age”, where humanity has come to be seen as the integral operator in the earth’s “natural” systems, what the romanticists posed as a force opposite to nature has since become it. At the same time the Anthropocene undermines the rationalist perspective by showing that whilst humanity has the capability to influence the natural world, it cannot remove itself from its relationship with it. Having made themselves more integral to the earth’s ecosystems than ever, humans are more at risk than ever when those systems change as a result of their actions. If humanity as a species were truly in control of the natural world, it would not have chosen to create the Anthropocene.

    In other terms, if we accept the idea of the Anthropocene, we must accept that the “human” identity as constructed must be one that incorporates itself into a wider view of nature and the planet as part of one earth-system. Therefore, the natural world cannot be construed as “other” through either romanticism or rationalism. This view of an ‘earth-system’ is important for the history of childhood as it understands the planet as a ‘unified, complex, evolving system beyond the sum of its parts’, a view that presences and places importance on historical actors that have otherwise been deemed negligible.

    The Becoming World

    Understanding how the Anthropocene highlights environmental and ideological faults that underlie contemporary and historical perceptions of child and nature will allow historians to construct revised narratives of childhood and environment alike. This means acknowledging and exploring how children act upon their environments both personally and extensionally through the ‘nature-culture hybrids’ of the societies they inhabit. Interdisciplinarity will be key to unlocking such stories, as the chimera of the Anthropocene requires the expertise of geographers, biologists, and earth-system scientists, amongst many others, to fully interpret. At the same time, as Andreas Malm and Alf Hornborg point out in The Geology of Mankind?, such professionals are not necessarily knowledgeable in the study of human relations with the planet, ‘the composition of a rock or the pattern of a jet stream being rather different from such phenomena as world-views, property and power’. Only studies that take an interdisciplinary approach will have the capability to understand what childhood means in the Anthropocene in both environmental and humanist senses.

    Adopting methodologies of new materialism has proved one of the most popular styles of exploration in this category, much of the work so far having come from anthropology. Away from the conceptual, the Anthropocene Epoch reminds us perhaps first and foremost of the tactile, of how much the human relationship with the planet is one based fundamentally on physicality before philosophy. Where children are concerned, their physical interactions with their environments are based on specific wants that differ substantially from those of their elders, most evidently the wants of play and exploration. Children are therefore inhabitants and engineers of unique environments and relate to “adult” spaces in unconventional ways; they ‘interactively embody their surroundings through play’, as Kirsti Pederson Gurholt describes in Curious Play. This includes a particular interest, born of novelty and of these divergent wants, in aspects of the natural world toward which adults show less interest. An attraction to death and dead animals is one common current that runs through materialist analyses of childhood, from Pamela Riney-Kehrberg’s description of the bouncy-castle horse carcasses of New York’s city streets to Eduardo Kohn’s accounts of the spoils brought home from the hunts of the Quechua people that offered little interest to the adults but garnered much from the children. The presentation of unidealized accounts that genuinely examine children’s relationships with the material world, rather than those that others have conceived for them, works to undermine misleading enlightenment conceptions of childhood that would have them repulsed by and disconnected from the “unnatural”. In the Anthropocene, where the objects and materials humans accumulate and throw away have come to be powerful agents of environmental transformation, we are required to challenge ‘deeply rooted cultural oppositions such as animate versus inanimate and active versus passive’ that ignore the materiality of the planet and of children’s lives.

    Beyond objects and materials, the landscapes of childhood are equally important to recognise as ‘inherently pedagogical contact zone[s]’, meaning a recognition that all environments are environments of learning. Young people are often drawn to the abandoned and the secretive over the idyllic, spaces such as a den or an old factory where you can “make your own fun” proving to be more intriguing propositions than deliberately constructed environments such as playgrounds or youth centres. These spaces that children choose to inhabit, which generally fall outside of the “adult world”, allow them greater freedom and a more authentic pedagogical relationship; relationships that will go on to be instrumental in their adult attitudes towards particular environs. As Gibson and Graham write in A Feminist Project for Belonging in the Anthropocene: ‘The Anthropocene calls to us to recognize that we are all participants in the ‘becoming world’, where everything is interconnected and learning happens in a stumbling, trial and error sort of way’. The “human age” asks us not only to consider what the environments of childhood can teach us about environment and childhood, but also what they can teach to each other. Indeed, it asks us to reframe and presence spaces of childhood in the historiography that have before been deemed “abandoned” or ahistorical.

    If we embrace the spatiality of children’s environments we gain an appreciation of children (and the natural world) as ‘social actors who are enmeshed in richly diverse social worlds’ rather than ‘separated out, disconnected individuals understood solely through developmental needs and discourses of rights’. As a global phenomenon the Anthropocene touches all human lives to greater and lesser extents, and does not do so in proportion to the influence different groups of people have over it; thus drawing a distinction between “children’s environments” and “adult environments” as separate entities is unhelpful. If the Anthropocene does not confine itself to the adult domain, we cannot confine our studies so either. It pushes us to consider the construction of our built and landscaped environments more carefully, with greater sensitivity to how children will know, sense, touch, and exist in them. As Karen Malone concludes in Children in the Anthropocene, only with an appreciation of spatiality can we ‘acknowledge how it is to be child with a host of others and the potential differences… their ‘acting’ as an ecological collective can have on the ecosystems of the planet’.

    However, the relationship between environment and child as an element of wider earth-systems extends beyond the material. All historical agents act extensionally upon their environments through how other agents act toward and around them, and this holds especially true for agents such as children and the natural world that are perceived in wider society to lack agency of their own. Holding to enlightenment form, whether children are construed as ‘wayward, chaotic and disordered’ or ‘pure, innocent, and in need of protection’, there is a sense of need, or even duty, to act for them rather than with them. In the Anthropocene, the concept of what childhood means and has meant is changing. Childhood is seen as being under threat in a way that other human life-stages cannot be, the perceived symbiosis between child and nature being so strong that the threat of anthropogenic climate change to the natural world is naturally a threat to childhood also, through cultural severance. At the same time children are cast as the saviours of the planet and symbols of environment(alism) in a process that Peter Hopkins and Rachel Pain call ‘fetishising the margins’. The natural world is simultaneously vulnerable and dangerous, especially for children who are ‘inherently more sensitive’ to its hazards in both physical and psychological contexts. The Anthropocene invites us to consider multiplicities within childhood and environment that previously were singularities. As Alan Prout writes:

    ‘the singular universal and naturalised category of childhood [should] be replaced by childhoods understood as dynamically configured, diverse and entangled assemblages of natural, cultural and technological elements’

    The study of childhood brings a focus to Anthropocene studies of smaller, more intricate environments, where the tendency is often toward grander overarching histories of ecosystems and global networks. It asks us to consider how environment is presented to children, what narratives are taught through our stories and schooling about the natural world, and how those influence us in adulthood. The Anthropocene also asks for a reappraisal of the narratives that adults tell themselves about childhood and environment, particularly those of nostalgia that idealise or demonise certain types of youth, as these reflect ‘anxieties about social and economic change and its impact on the child, and the individual sense of identity and belonging, present in everyday life’. The study of how children’s lives are changing in the Anthropocene era is an important undertaking, but the conceptual framework this provides can also be used to study the history of childhood, and to tell new stories that presence the child and their environments on their own terms.

    Author/Publisher: Louis Lorenzo

    Date of Publication: 2nd of December 2020


  • Anthropocene Ecologies of Nutrition and Economy

    The Ethiopian drought of 1983 to 1985, which led to a famine that left 1.2 million people dead and 2.5 million displaced, was not an act of God. This time something was different, or something was different in the way people were beginning to view what environmental disasters meant in the “modern age”. More specifically, what they meant in an age where the activities of humanity were becoming increasingly inseparable from those of the rest of the natural world. Questions were being raised, as James Verdin et al. ask in Climate Science and Famine Early Warning, as to why this drought had come so quickly off the back of several other droughts the region had endured in recent years, out of step with existing cycles of Ethiopian climate that typically saw drought once a decade. Droughts occur when high temperatures increase the rate of ‘evapotranspiration’, whereby water is lost from soil and the flora it supports. This also contributes to wildfires, such as those seen in Australia in 1966, 1993, and 2019. Changes in temperature also affect rates of rainfall, as they influence air and ocean currents, making dry areas drier and wet ones wetter, meaning plants that would traditionally grow in a region will no longer do so. As food is an integral element of culture, the loss of these traditional foodstuffs then damages affected peoples on a societal as well as an economic level.

    Contemporary climate scientists suspected that the Ethiopian droughts were not “natural” in origin, at least in the traditional sense of the word. It was later shown that they were instead a consequence of a tropical rain belt that had been consistently pushing southwards over the course of the latter half of the 20th century, leading to decreased rainfall all across the southern Sahara. Furthermore, the reason for this southward shift was directly attributable to human activity. In an article for Geophysical Research Letters in 2013, Yen-Ting Hwang et al. explain how the release of sulphates into the atmosphere via the burning of coal in Europe and North America had been the ‘primary cause’ of the ‘aerosol cooling of the Northern Hemisphere’ and the subsequent change in global weather patterns.

    Since 1985, an increasing number of “natural” disasters have been ascribed, at least in part, to human environmental influence, and as a result historians have increasingly been looking backwards with a more critical eye toward connections that could be drawn between what had previously been viewed as the separate disciplines of “natural history” and “human history”. Amongst this discussion, one term has come to embody this newly perceived relationship: Anthropocene, the age where, as Paul Crutzen remarked at the turn of the millennium, ‘the global effects of human activities have become clearly noticeable’. In this new state of affairs the human had taken position as an integral element of the planet’s climatic, aquatic, and other earth systems. But how has the “human age” degraded nutritional and economic ecologies? And why did humans create the system of global, industrial, capitalistic agriculture that is primarily responsible for it?

    Ecosystems of Ideology

    Understanding ideology, poverty, and famine as biocultural aspects of the same system is essential in the Anthropocene. When pressure is applied to one part of the system, the others will be affected, be those pressures economic, psychological, or nutritional. In 1993 India saw flooding that killed 530 people and destroyed 1.2 million acres of crops and other flora. To understand this, an approach is required that highlights three main factors of environment: the biotic, the abiotic, and the cultural. The biotic factors are those that include the biology and health of humans, but also those of the other life forms that occupy the same environment. Abiotic factors involve the geography and climate of the environment, and cultural factors are manufactured anthropogenic elements that include ‘such phenomena as world-views, property and power’. Understanding each of these factors in any given environment requires understanding of the other two. In other words, they must all be seen as variables of the ecosystem. In this case, increasing pollution in the atmosphere was responsible for creating a more intense greenhouse effect that led to greater oceanic evaporation and more rainfall, followed by these floods that left millions of people homeless. The flooding also caused soil erosion that further impacted agricultural productivity during recovery and therefore increased nutritional stress. In this way the cultural (or ideological) drive of industrialisation in the market economy acted as a force of oppression via abiotic and then biotic factors.

    Often such relationships are non-obvious, such as in how the mineral content of a community’s water will be based on the composition of the rock into which they have dug wells. Those with less natural fluoride available in their water, such as the people of Huila province in Colombia, for example, will have higher rates of tooth decay and thus more nutritional complications. On the other side, those with too much fluoride can suffer ‘enamel fluorosis’, impairing tooth development when young. Such environmental inhibitors can range from the relatively insignificant, as with the fluoride content of water, to the far more serious, as with life expectancies in northern China falling 5½ years below those in the south from 1950 onwards. Yuyu Chen found that, because a cooler climate demanded the burning of more coal, air pollution in northern China had become considerably higher than in the south, and this was by a large margin the deciding factor in the increased rate of cardiorespiratory deaths in the region.

    However, whilst the recent Anthropocene has seen a greater prevalence of human-induced environmental change, the scientific consensus is that humans have been contributing to the warming of the planet for at least a century, and historians note that humanity has been shaping the planet’s ecosystems long before that. As Robert Goodland and Jeff Anhang explain in Livestock and Climate Change, measurements of rising undersea and atmospheric temperatures that fall far outside of what could be considered normal variability are solid evidence of the impact that industrial agriculture and power production have had in producing global warming. This phenomenon has polarized the earth’s climatic zones, shuffling them north in the northern hemisphere and south in the southern hemisphere, producing the most dangerous changes in areas that were already “on the edges” of these zones. The areas most affected, being those of more extreme conditions, are typically those where marginalised peoples, flora, and fauna exist who have more specialist lifestyle requirements, making these changes all the more devastating. This is why, according to the UN, 99% of deaths attributable to climate change have occurred in developing nations. As David Ciplet et al. explain in Power in a warming world, the advent of the Anthropocene has only increased the ways social inequality and poverty translate into poor health, through factors such as increases in stress hormones, exposure to dangerous toxins, and diminished access to healthcare. Those living in poverty are more likely to live near toxic sites, such as the residents living alongside the oil fields of the Niger delta, 60% of whom say their health is being affected by air, water, and land pollution. As fossil fuels become scarcer, methods of extracting them become more inefficient in terms of not only energy use but also other resources such as water. Techniques such as hydraulic fracturing require large volumes of fresh water and pollute local water cycles, thus competing for that valuable resource with growing human populations.

    Anthropocene Ideology

    For historians, however, “Anthropocene” is not only the recent epoch but also a methodological framework: studying the history of environmental events in the context of the Anthropocene. To this end, the material culture of civilisations is an essential source. For example, we can use palaeopathological methods to look at the skeletal structure of Mayans from the 6th to 10th centuries, as William Haviland did in the 1960s, and note that people were growing shorter over time. Simultaneously we see fewer animal bones in the record, indicating a reduction in food availability. At the same time, however, those skeletons of the elite that were entombed did not change in size, showing us how the nutritional stress on the population had both environmental and societal influences. This historical practice of ‘medical ecology’ is still an emerging discipline and requires historians not only to appreciate which environmental factors affect a person’s health, but how those consequences then tie back into the earth-systems they inhabit. Famine and poverty exist in cycles, or spirals, that create the conditions for their own continued deterioration.

    Anthropogenic climate change has contributed to the cycle of poverty by putting excess stress on individuals and communities that did not have the resources, be those economic, political, agrarian, or otherwise, to cope with the change. For example, as human settlements in the 20th century were increasingly built in, or climatically moved into, areas of the world that previously were too hot to be hospitable, such as in Saudi Arabia, more and more people moved into housing that was designed explicitly with air conditioning in mind. However, problems arose for people living in poverty who could not afford this “convenience” and had their economic and physical health threatened as a result, a situation that has been called ‘cooling poverty’. To use Nancy Romero-Daza’s term for describing the relationship between violence, drug abuse, prostitution, and HIV/AIDS, the relationship between ideology, poverty, famine, and environment is syndemic. They are ‘not simply concurrent problems, but rather constitute a set of mutually reinforcing interconnected epidemics’. For example, when the Nipah virus broke out in Malaysia in 1998, those who suffered it were overwhelmingly ethnic Chinese rather than Malay. This was not because they were biologically more vulnerable than the rest of the population, but because they were the people who provided cheap labour on the pig farms where the disease originated. Given this context, an even more appropriate term to describe the relationship between poverty, famine, and ecological degradation in the Anthropocene would be ecosyndemic, an idea that places emphasis on environmental factors in the creation of poverty and famine in the 20th and 21st centuries.

Experiences of poverty and famine are ideological, in that they exist within a set of power relationships between those experiencing them and those not. As Ann McElroy and Patricia Townsend describe in Medical Anthropology in Ecological Perspective: 'Sickness is a social category – the sick role in a particular society, the way a person who is ill is expected to behave'. When looking at poverty and famine in the Anthropocene, these relationships become more complex. If the whole world is "sick" in the Anthropocene, who plays the doctor? In the case of the Ethiopian drought of 1983, it was the nations that had caused the disaster to begin with, inadvertently or otherwise, who acted as healers by providing financial support through the "Live Aid" event. In these western nations, mid-20th century optimism, particularly in America, was typified by an ideological belief that technological advances in nuclear energy, antibiotics, and agriculture with the "green revolution" would be able to create a world without poverty, disease, or hunger. Such ideas were strongly rationalistic, based on presumptions that humans could control their environment through technology; but the famines, epidemics, and other "natural" disasters of the late 20th and early 21st centuries speak otherwise. The environmental consequences of Anthropocene ideologies have served to undermine them and have brought forward new arguments suggesting that poverty and famine can only be addressed by working within the frameworks of the natural world rather than over them.

Looking back into our history, the Anthropocene conceptual framework pushes us to question where human decisions in relation to the environment have played a role in causing poverty and famine. Peoples who chose to transition from hunter-gatherer to settled agricultural societies, for example, created living environments in which new types of disease evolved through closer contact with animals. Clearing land created new breeding grounds for mosquitoes, and digging irrigation ditches made new homes for the tiny parasitic worms that cause bilharzia and, with it, anaemia. As Mark Cohen and Gillian Crane-Kramer write in Ancient Health, skeletal records of early agricultural societies show 'increased nutritional stress' compared to their hunter-gatherer contemporaries, whose more varied and flexible diets made them less susceptible to famine. Foragers, with diets of higher energy and greater variety, were also far less likely than settled agricultural peoples to develop nutritional deficiencies. With the emergence of the city environment in the historical record, Cohen and Crane-Kramer also note how cultural differences between peoples begin to play more important roles in their lifestyles. Indeed, the general trend identified in societies from the advent of agriculture to the modern day has been an increase in social stratification related to changing economic conditions based on environmental factors. Cases of poverty and famine in the Anthropocene follow this trend, being most acutely felt by specific groups of people on the lower levels of the social strata. Such experiences become less universal within a society and more variable based on a person's specific place in the nature-culture order.

The great Chinese famine of the late 1950s and early 1960s, as Dali Yang notes, is one example of a famine considered to be predominantly "man-made", being the result of ideological policy decisions that caused falls in food production. In this example the connection between human decision and human suffering is clear, but because in the Anthropocene poverty and famine are often further abstracted from their ideological causes via the environment, lines of causation are harder to conceive of and to draw. Indeed, the Anthropocene leaves little room for policy makers to blame "natural disasters" for crises, as the Chinese establishment did. What makes this more difficult is that the decisions behind the anthropogenic climate change which has contributed to famine and poverty in modern history were generally made in different countries from those which were harmed. As René Dubos wrote in Science and Man's Nature, often the most widespread impacts of ecological stresses come not from the events themselves, but from the social organisation and behavioural traits of the societies that surround them.

    Minority Ecologies of the Anthropocene

Whilst peoples who live outside of, or even tangentially to, globalised consumerist economies make up a fraction of the earth's human population, they are responsible for managing c.25% of the world's tropical forests. For these people, even if the Anthropocene does not bring full famine or poverty, it can still bring aspects of those things. Human bodies require a complex array of nutrients to operate efficiently, and these are obtained in different ways by different cultures around the world. In the South American rainforests, for example, where the tropical climate results in acidic soil and plant life of low nutritional value, there is no easy supply of animals to provide dietary protein. Therefore, in many indigenous diets where meat has been uncommon, protein was instead obtained from a mix of maize and beans, which complement each other when eaten together. In the 1960s these two staples began to be displaced by the consumerist economies of the Anthropocene, resulting in less healthy and nutritious diets, if not outright hunger. This pattern also holds true for the peoples of the Kalahari Desert, where there has been a steady decrease over time in food obtained from hunting and foraging, replaced with imported commercial products such as cornmeal and sugar. Where at the start of the 20th century almost all of their nutrition came from wild sources, by 1980 this figure was down to 20%. This brought both disadvantages and benefits, such as an increase in nutritional deficiencies but also a decrease in infant mortality due to a greater availability of milk. This exemplifies how the Anthropocene influences not only the production of food in different environments, but also the way it is distributed throughout a given economy and the way it is culturally viewed and prepared.

Hunter/gatherer societies' food sources are not only put under threat by climate change, however. Indeed, according to the International Union for Conservation of Nature (IUCN), habitat loss has been the primary reason behind the extinction and endangerment of mammals, birds, and amphibians around the world, and whilst climate change is a contributor to habitat loss, most of it has been the result of more physically immediate human action: the remodelling of landscapes into agricultural land. This decrease in habitat size has thus reduced the stock of plants and animals from which to gather.

In contrast to foragers, people worldwide who live as subsistence farmers (approximately 75% of the world's 1.2 billion poor) have diets that depend on one specific staple. In temperate European and Asian climates this has been wheat; in Africa, millet or sorghum; in South and Southeast Asia, rice; and in the Americas, maize. Economies built in this style are able to produce more food than hunter/gatherers through more intensive agricultural techniques, but are bound to that single staple's limitations. Such peoples have been more vulnerable to a lack of certain vitamins or minerals, and more likely to have seasonal hunger incorporated into their farming routines before a harvest. This means that should their staple fall prey to blight, drought, or another limiter, the entire system collapses, leading to famine. This occurred in Kenya in the 1980s, when the economic pressures of Anthropocene markets caused an increase in poverty and malnutrition because pastoralists were not able to sell their produce at sustainable prices. In the Anthropocene this is becoming more common, and so this style of subsistence, like hunter/gathering, is becoming less viable. Traditional cuisines around the world have thus been put under threat in the Anthropocene by both ecological and economic contributors. Local foodstuffs are highly cultural, so this represents societal damage, but they are also the end products of a process of adaptive selection over time, whereby different culinary combinations have been trialled and those best suited to the environment retained. Changing these diets without consideration of local particularities, therefore, is more likely to be a force of harm than of help.

The Anthropocene has created environmental conditions which make methods of subsistence other than the one which birthed it, globalised industrial commercial production, more difficult to maintain. Through the loss of wild foraging spaces to habitat destruction, and the increase in "natural" disasters that decimate traditional farming techniques, those older forms of food acquisition are attacked and diminished. This is because the majority of the energy utilised in industrial agriculture comes not from human labour, which supplies only around 5% compared to around 90% in its alternatives, but from energy sources such as coal and oil that produce high amounts of pollution. Industrial agriculture is vulnerable to the same threats as subsistence agriculture, namely the cultivation of monocultures susceptible to disease, but this is remedied with the use of pesticides, which in turn harm the health of the humans and ecosystems they come into contact with; when pesticides of the neonicotinoid group caused 'colony collapse disorder' in bees in Massachusetts in 2013, for example, the loss of pollinators damaged the whole ecosystem, including the yields of the very farmers who used those chemicals. Furthermore, such crops have less nutritional content overall compared to crops grown without the use of pesticides or glyphosate herbicides. Indeed, the use of these agrochemicals ultimately degrades the soil and water resources of the ecosystem to such an extent that it becomes unsustainable, especially in an environment of increased drought, and fertiliser runoff into bodies of water contributes to the growth of cyanobacteria. These bacteria flourish in the excesses of nitrogen and other chemicals that fertiliser provides, and in turn harm the health of aquatic life and of the humans who drink the water, eat its products, or use it for recreation.

    Anthropocene Societies

The nutritional and economic stresses of the Anthropocene are not found only in traditional agricultural and foraging environments, however, although these are the most drastically affected. In America, for example, at least 15% of households suffer from nutritional stress and food insecurity despite the availability of charities and government programs. These same households are also prone to obesity, because both conditions relate to a lack of healthy food in a person's diet rather than a lack of food overall. The societies most responsible for creating the Anthropocene, the same societies that have profited from it, have seen a shift toward meat as the prime source of protein in their diets, and therefore increased intakes of animal fats. Alongside this, people living on 'supermarket diets' see increased intakes of high-glucose and high-fructose foodstuffs, which are associated with obesity and several chronic illnesses including diabetes, various autoimmune diseases, and kidney inflammation.

Industrial meat production chains that have used antibiotics on animals in order to keep them in closer quarters have bred resistance in the pathogens they were targeting, thus making those illnesses harder to treat in human populations. Additionally, estrogenic chemicals used in the production of plastic packaging in industrial food networks, and those used on animals and crops, are suggested to promote the growth of fat cells in the body. These chemicals disrupt a human or other animal's endocrine system, meaning the various hormones used to regulate fat growth can be confused. It is also likely that such disruptions can be passed down to the children of parents who are exposed to these chemicals pre- or postnatally. At the same time, the inspection process for fresh produce has not been able to keep up with production or global distribution, and so foodstuffs contaminated by bacteria or pesticides have been more likely to reach cooking pots. This has made the task of tracing contaminants back to their source more difficult.

As Anna Bellisari describes the phenomenon in The Obesity Epidemic, obesity is 'the predictable outcome of the highly evolved human metabolic system functioning in an obesogenic environment created to fulfil the American Dream'. This highlights how the cultural aspect of an environment can fundamentally impact the biotic and abiotic. The supermarket societies of the 20th and 21st centuries that have seen obesity epidemics have done so primarily because their economic and political structures created a gastronomic environment wherein unhealthy foodstuffs were promoted because they were more lucrative than healthier alternatives. Additionally, the promotion of the idea that a person's fatness is based wholly on individual choice, as opposed to also being influenced by structural elements of the environments of nutrition and health in which they live, has allowed the continuation of this structure and the growth of other industries surrounding diet and fitness.

The increased yields typical of industrial agriculture also sometimes translate to less or worse nutrition for people. In Central American nations, where the production of products such as beef increased dramatically during the 1960s, the consumption of beef has gone down, ranchers themselves having to 'pay more for less' as the food was exported to other markets around the world. These globalised markets of industrial societies have also widened the scope of the impacts of food crises such as droughts, particularly for those living in poverty. A poor harvest in one key area of the world can have devastating impacts on the nutritional health of people the world over, as with the millennium drought in Australia (1996 to 2010), which caused a spike in grain prices and led to deaths and protests in over 50 nations around the world.

Recent history has challenged historians to change the ways they think about phenomena such as ideology, poverty, and famine. The Anthropocene's ecosyndemic systems of nutrition and economy can only be grappled with through a holistic understanding of each as a node in a network of causality that is often non-obvious and multidirectional.

    Author/Publisher: Louis Lorenzo

    Date of Publication: 4th of August 2020

  • Nature as Identity: An Environmental History of Bristol’s Civic Improvement Societies

    INTRODUCTION

When the ash had settled at the end of the Second World War, Bristol emerged quite a different city from the one that had entered it. Bombing had destroyed many buildings, such as the Jacobean St Peter's Hospital and St Peter's church, as well as over 80,000 houses, and the question now arose of what would fill that vacant space. An opportunity for significant development had been created. Over the coming years, and particularly from the mid-1960s onward, the shape of the new Bristol began to form, and as structures populated the empty lots many Bristolians found themselves taking issue with the environment in which they now lived. What resulted was an explosion in the creation of what might be called "civic improvement societies": concerned citizens forming themselves into groups that sought to improve, in their own view, the environment of Bristol. In wider society too, a growing environmentalist movement with a global perspective was influencing how people conceptualised their relationship with nature and the environment. The central questions asked in this essay are: to what extent were the reasons behind the creation of Bristol's civic improvement groups based in their understandings of nature and the environment? And what were those understandings?

This essay will examine four of the most prominent of these societies at their inception: the 'Clifton and Hotwells Improvement Society', the 'Bristol Visual and Environmental Group', the 'Bristol Life-Style Movement', and 'Bristol Friends of the Earth'. Each of these, whilst similar in many aspects, held a distinct ethos and approach to its action and had unique understandings of and relations to nature and the environment. Some attention will also be given to two other important developments during this period, Windmill Hill city farm and the proposal for the Avon Gorge hotel, which further demonstrate the multitude of approaches towards the environment in Bristol during this period. Broadly, this essay finds that the societies fall into two categories, those with a globalist perspective and those with a localist one, though within these categories there is still much variation. Ultimately it finds that each of these groups conceptualised nature and the environment in different ways and, with the exception of Bristol Friends of the Earth, not in a manner that would be familiar to contemporary environmentalist movements. The drive behind their creation was instead more greatly influenced by other factors tied closely to the rationale of environmentalism: a desire for aesthetics in town planning, an internationalist philosophy, and an individualist rejection of modernity. Overall, however, the most influential factor that sustained these environmental groups was a fear over the loss of community and identity within Bristol, and the natural world acted as a catalyst around which such sentiments could form.

    Methodology and Historiography

The primary source base for this essay is the records of these organisations kept at Bristol Archives: their constitutions, the minutes of their meetings, their newsletters, correspondence, leaflets, and posters. These sources have been especially useful in giving a view of both the public and private sides of these organisations, which allows for an examination of how much ideas around nature were used as promotion to the public as compared to their prevalence within the organisations themselves. As the sources were intended either for the public or for official use, they lack personal material on the opinions and understandings of the individuals who made up these societies; the picture given is instead one of each organisation's approach in totality. An oral history would be a useful avenue for further exploration in this area. These sources are therefore used to examine the philosophies and approaches of these groups as collectives: how they understood nature and its place within their goals for "improvement", and how they utilised concepts of the environment and nature to achieve those goals.

The historiography in this very specific area has been light, the only works of note being incidental histories produced by those societies that still exist today. These autobiographies were intended for general interest as opposed to academic use and cannot be called impartial, nor do they attempt to be. On a national level, several academic works have considered the origins of modern environmentalist movements, such as David Pepper's The Roots of Modern Environmentalism, which charts the influence of romanticism, Darwinism, and socialism in the 19th and 20th centuries in forming environmentalist philosophy. Nick Crowson et al.'s NGOs in Contemporary Britain: Non-state Actors in Society and Politics Since 1945 explores this in a more modern context, explaining how the rise of environmental science bolstered and altered the arguments of these organisations during the mid to late 20th century. Donald Worster's The Wealth of Nature: Environmental History and the Ecological Imagination laid much of the foundation for works such as these in 1993, explicitly tying the practice of environmental history to environmentalism in the present day. However, where these texts have focussed on the emergence of environmentalism nationally and internationally, this essay will examine the phenomenon at a local level, within the context of one city.

    CHAPTER ONE

    The Localists

The Clifton and Hotwells Improvement Society (CHIS) was founded in 1968 by a group of residents who wanted to, in their own words, 'keep Clifton's character and charm intact' in an age of 'industry, transport, and new housing'. Reading some of their more recent literature may give the impression that concern for nature and the environment was a primary factor in the society's foundation, but this is something of a misrepresentation. In their 2008 leaflet 40 Years of CHIS they wrote that 'the natural world is not merely an add-on to human activity, but an essential core of it', and in regards to their history they imply the group was founded on similarly strong principles. Therein they wrote that they 'like to see' the people who campaigned against constructing housing over the Leigh Woods area in the early 20th century as 'the precursors' of their organisation, but it was George Wills who was responsible for donating Leigh Woods to the National Trust, not the CHIS. Whatever the society has become today, at its inception the CHIS was not very interested in environmental action, though that is not to say it took no interest in the natural world. Rather, the group was more invested in the aesthetic of nature: an aesthetic with which to complement the historic Georgian and Victorian architecture of the area and with which to combat modern constructions.

In their original constitution of 1968, they stated their intended aim as being to 'encourage high standards in architecture and town planning', with no allusion to conservation or ecological concerns. Compared to the other civic improvement societies examined in this essay, the CHIS took the most conservative approach in this area, their primary interest in environmental management being as a tool for making the place they lived in feel civilised and friendly. Their key purpose was the conservation and preservation of the existing urban townscape. Nature and the environment did hold a place in this vision, however, as green spaces were intrinsic to the Georgian and Victorian architecture of the area. As such, the CHIS demonstrated in their meetings and correspondence a good deal of concern over the state of residents' gardens, which they resented being turned into spaces for car-parking, calling these 'bald front gardens' a 'scar' on the area. They sent many letters congratulating or criticising businesses and individuals on the state of their gardens, such as one to the South West Electricity Board complimenting them on some 'well identified planting' around their new sub-station. However, they were happy to see greenery removed if it improved the aesthetic of an area, writing to city engineer J B Bennett that some members had been 'distressed' by the idea of removing trees along Buckingham Place but felt it was 'overall good for the frontage'.

In terms of civic space they were also keen to preserve communal green spaces, having them declared town greens to prevent them being built upon. However, the minutes of their meetings convey that their principal purpose for this was to halt the 'threat [of] new housing' that would not match the aesthetic of the area, as opposed to valuing the land as green space in and of itself. The practice perhaps most emblematic of their approach towards nature, though, was what they called 'putting an order' on a tree: singling out individual trees to be preserved because of their pleasing positioning and appearance, such as when writing to Mrs Wilton in 1975 to ask her not to cut down the tree on her front lawn. This demonstrates the specificity of the CHIS's environmental considerations, the focus on the minutiae. They were not contemplating nature as an ecological system in need of the same level of conservation as the buildings of the area, but rather as a useful array of decorations that could be tactfully applied to improve the character of Clifton and Hotwells.

The philosophy of the CHIS very much spawned from the local environment of the area, but from the urban environment as opposed to the natural. As the society developed it did begin to consider nature in a different light, seeing it as having not just an aesthetic benefit but also a recreational one, their 1977 constitution altering their aims to include the 'provision of facilities for recreation and other leisure time occupations… in the interest of social welfare'. Even here, however, the CHIS was interested in what the local ecology could do for them rather than what they could do for it.

    Bristol Visual and Environmental Group

The Bristol Visual and Environmental Group (BVEG), founded in 1967 (one year before the CHIS), took a far more radical approach. Their remit took in the whole city as opposed to a single area, and they paid particular attention to public transport as an alternative to the car. They argued that the development of large modern blocks and high-density road networks was damaging the character of the city as a whole and harming the environment in the sense of that inhabited by the humans who lived there. One of their most common calls was to keep the city at a 'human scale', writing to the council in 1970 to berate 'the old city losing all its atmosphere of a medieval walled town, the prevalence of ugly car-parks and offices, [and] the vanishing interest in gabled or curved roof lines'. They were also concerned with tall buildings breaking up the terraced nature of the city on its hills, arguing that buildings higher up the hill should be able to overlook those beneath them. One thing that set the BVEG apart from the CHIS was its level of political engagement: frequently writing to and criticising the city council, putting out argumentative leaflets and newsletters, and even forming a policy in 1971 to 'name and shame' urban planners who, as they saw it, damaged the integrity of the city.

In its language the BVEG demonstrated a desire to tie itself to environmental discussion and spoke out strongly on green issues. In their public newsletters during the early 1970s they wrote that the council should 'think hard before they develop more housing on GREEN OPEN SPACE', recognising the inherent value of natural spaces. They were also very concerned with the environmental impact of the car on human health and the health of the city, writing that the council should instead 'encourage the use of GREEN TRANSPORT using WATER AND RAIL'. Ultimately, however, the group proved more interested in the 'visual' portion of their name than the 'environmental'. Their use of "green language" was more an attempt to capture the zeitgeist than to follow it. They called for the creation of conservation areas, but only across urban spaces as opposed to natural ones. They thought the council should not build on green spaces, but only because 'there is so much derelict land available in the city centre', not out of a wish to preserve the spaces themselves. In a newsletter in 1972 they wrote that the group's aim was 'to preserve the historic and unique character of Bristol… these aims being consistent with conservation and the prudent use of natural resources'. The order and wording of these objectives is important, demonstrating that the group was primarily concerned, like the CHIS, with the architectural character of the area, whilst recognising that these goals could be 'consistent' with environmentalist causes.

Where the CHIS was interested in the aesthetic of nature and the environment, the BVEG was interested in the language of environmentalism, utilising it to further the appeal of their arguments to the public. In actuality the BVEG and the CHIS had very similar agendas, but the more political approach that the BVEG took on those issues led it to be more attentive to, and able to capitalise upon, political trends. The group's central message had always been that 'Georgian Bristol [was] under threat'; what they recognised was that the things which threatened it, roads, cars, and tower blocks, were the same things that concerned environmentalists as well.

These two societies were localists, in that they were drawing influence from, and looking for change in, local causes. Their concern lay with the identity of Bristol as a city in the post-war context. In the present day this image is often tied to nature, the environment, and environmentalism; at the conception of these groups in the sixties and early seventies, however, the natural world was not prevalent in their philosophies for civic renewal. For them the "local environment" for which they sought change was predominantly an urban one, and the task at hand was one of town planning and the encouragement of architectural rigour. The natural world held a place in that schematic, but an ancillary one; it was not a driving factor for their activism. From the localist standpoint, it was the buildings of Bristol that had been destroyed in the war, and it was building in Bristol that was destroying the city after it. Thus, they did not conceive of the environment as other societies with more international agendas might. Bristol was not a piece of the puzzle, it was the puzzle, and the natural world simply did not fit in as a key concern; indeed, the city is generally the place where it is of least concern.

    CHAPTER TWO

    Bristol Life-Style Movement

What of organisations that held more internationalist attitudes? As alluded to at the close of the first chapter, this essay finds that the globalists incorporated concerns over ecology and nature more deeply into their core philosophies. However, this did not necessarily translate to stronger advocacy for environmental causes, as is the case with the Bristol Life-Style Movement (BLSM). The philosophy of the BLSM is perhaps the most interesting of those explored in this essay, as it was simultaneously global and individualistic, radical and conservative. The group was a Christian organisation that stood against much of what the "modern world" had brought them in the sixties and seventies. They wrote to their members of 'the myth of progress', advocated 'freedom from the consumerist rat-race', and were fond of quoting Gandhi: 'there is enough in the world for everyone's need but not enough for everyone's greed'. They were disillusioned with capitalism and saw the environment as a key casualty of it; they wanted a return to a simpler life, a life that was closer to God.

The BLSM saw their action as part of a wider struggle for 'global justice'. They supported increases in foreign aid, the boycotting of South African goods over apartheid, and attendance at the 'fight world poverty mass rally' in 1985. They framed these as environmental issues, saying in their promotional material that 'peace means sharing of resources' and that 'conservation is survival'. This was a fusion of environmentalism with traditional Christian causes; destruction of the environment was a large-scale issue that, they argued, required a coming together to address. In other words, the environmental threat was a single, global, unifying factor that underlay many other causes the group cared about, and acted as a new reason to spread their charity and faith. The environment of Bristol was inextricably linked to that of China or Lesotho, and people in the west needed to be made aware of 'the environmental destruction on which their standard of living depends'. In these ways the group was more radical than the CHIS or the BVEG, as betrayed by their calling themselves a Life-Style 'movement' rather than a group or society.

However, the official documents of the BLSM reveal that the practical action they advocated was not nearly so radical as the language and ideology of their promotional material. They stood against global ecological devastation, but they did not ask for societal or systemic change in order to combat it; instead they focussed on changes in individual lifestyle choices. Their slogan was 'live simply that all may simply live', and they asked of their members that they 'commit themselves to a moderate lifestyle as a personal contribution to the conservation of our planet'. Whilst they did conceive of nature as a transnational entity, their means of conserving it was very much local, even more so than the CHIS or the BVEG, who focussed on city-wide change as opposed to individual change. They encouraged cycling and buying organic, and discouraged driving and buying 'wasteful packaging'. It was in this sense that the BLSM was globalist yet individualist: their message was that change for the planet was tied to change within yourself. The lifestyle the BLSM encouraged, with its principles of simplicity, self-sufficiency, and community, held nature at its core. The natural world was God's creation, and the ideal was to live in harmony with it as much as possible. The world of office blocks, plastic bags, and the newly built M4 motorway was the antithesis of this.

Indeed, the ecological philosophy of the group also incorporated a wider rejection of authority and intellectualism. In their 1981 newsletter they wrote that 'the great technological age is making us more clever, but perhaps less wise' and that 'we are allowing the "experts" to organise too much of our lives'. In essence they framed "getting back to nature" as an escape from modernity, both its damaging physical attributes and its societal ones, writing in 1974 that the actuality of 'cheap fuel' was inseparable from the 'gluttony' of the society surrounding it. They saw themselves as alternative thinkers who were not given space by "the powers that be", complaining in a newsletter in 1981 that they were being 'written off as… left-wingers or even worse, Christians!'. For the BLSM, the allure of the natural world was its freedom as well as its simplicity. The localist societies wanted to see the environment of Bristol change, but they still wanted to live in it as a modern city. The globalist, anti-authority perspective of the BLSM, on the other hand, led them to want to escape the city entirely, whilst ultimately never physically leaving it. The method of achieving this escape was to make one's life more "natural", more alternative, more out of step with everyone else's: to detach oneself from what the BLSM saw as a morally vacuous normality, within a global community of people doing the same.

    Bristol Friends of the Earth

Bristol Friends of the Earth (BFOTE) was different from the other societies discussed because it was a wing of a much larger national organisation. Like the BLSM, their agenda was very focussed on international issues, but it went further in actively advocating internationalism as a philosophy. After their formation in 1972 they set up a 'World Studies Centre', stating that they wanted to encourage British society to be more 'outward facing'. In their language and their actions they far more closely resembled the environmental organisations of today than the other societies thus far discussed, and their goals overall were more focussed around an understanding of the natural world as something to be protected for its own sake. The CHIS, BVEG, and BLSM did all care to greater and lesser degrees about the state of the natural world, but in the context of how it could then help them with their own human existences. The health of ecosystems as a whole was not necessarily paramount if what existed was sufficient to achieve their desired ends. This is not to say that BFOTE were unconcerned with themselves; rather, they saw themselves and nature as one and the same, writing in a 1981 bulletin that 'we cannot afford to trade off the integrity of the planet's life support systems against short-term economic gains'. They conceptualised the ecosystem as a whole, of which all parts were needed for the machine to operate, including humanity. Nature was not something you could choose to incorporate into your lifestyle or neighbourhood as a means of improving your quality of life; it was something you needed to conserve in order to protect all life, including your own.

It was on these grounds that BFOTE participated in numerous campaigns during their early years: against excessive packaging, the Bristol ring road, the Severn barrage, nuclear power, whaling, non-returnable bottles, the Dartmoor tungsten mine, heavy lorries, and the use of lead in petrol. The variety of these campaigns, spanning local, national, and international matters, demonstrates that the group saw the protection of the natural world as an issue that crossed all of these spheres; they were all tied to each other and to humanity. As they wrote in a 1982 bulletin, the 'issues of environment and development are inextricably linked'. BFOTE saw the environment and nature as integral to their internationalist agenda. For them, nature was a force that transgressed national boundaries, both in the sense that non-human life pays little respect to the borders of countries, and in the international cooperation they saw as required to tackle issues of environmental destruction.

The group also ran a number of campaigns to encourage cycling and to advocate for the insulation of homes for the elderly. They involved local schools in 'pollution studies classes', where they taught children, teachers, and parents primarily about the dangers of leaded fuel but also 'the need for energy conservation'. They were consulted in the building of Wick Primary School on the outskirts of Bristol, which was built to be energy efficient and to utilise renewable energy. One of their largest campaigns, titled 'spot the blot!', asked people to 'take notes on gross examples of air pollution, filthy rivers and beaches, noisy factories or road junctions, despoiled countryside etc.' and report them both to their local council and to BFOTE. These programs were a way of joining care for the natural world with the bringing together of local communities. This is a theme to which all the groups studied in this essay adhere. Indeed, if there is one factor that binds the ideologies and philosophies of all these groups together during this short period, it is that of community and a sense of place. All of these societies, including those that did not conceive of the natural world as central to their goals as organisations, felt that in the creation of modern Bristol something was being lost that nature could help to regain.

    Windmill Hill City Farm and the Avon Gorge Hotel

Two projects during this period best exemplify this trend, both of which drew the support of all these societies: the creation of Windmill Hill city farm, and the campaign against the proposed Avon Gorge hotel. The Avon Gorge hotel was a proposed extension to the existing hotel that would be larger than the original and would sit just beneath it in the valley. Its proposed design followed the brutalist aesthetic that was popular at the time, though not with the members of the CHIS, BVEG, BLSM, and BFOTE. In an open letter they all signed in 1971, from 'the citizens of Bristol and residents of Clifton', they described it as a 'monster hotel project' that would 'generate even more traffic' and declared it would 'destroy the existing balance between natural environment and townscape'. The scale of the proposal was enough to bring everyone together against both its aesthetics and its environmental consequences; that natural space mattered to the identity of the city.

For the founders of Windmill Hill city farm, known as "the dustbin group", community and identity were central to their project; indeed, they were the reason behind it. The project began in 1976 and was created out of an area of land that had previously been housing but had been heavily bombed during the war and had lain derelict since. In the first public document the dustbin group produced, they were very clear as to the reasons behind their project:

    ‘local government structure plans of the 50s and 60s took little into account of the needs of inner-city communities – land was rezoned, urban motorways planned, industries relocated, houses and shops demolished, land left derelict and thus communities destroyed’

They felt left behind and overlooked, and what characterised that abandonment was rubble and motorways and crumbling buildings: urban decay. What a farm could achieve then, with trees and sheep and vegetables, was a departure from that. Nature was both a representation of, and a means of, urban renewal and the building of community and identity. The dustbin group's main argument in favour of the farm was that it would bring jobs to the area through 'community industry, project staff, and work experience'. It would repurpose derelict land, create jobs and a community hub, and prevent the construction of a lorry park that had been proposed for the site. The conservation of nature or the health of their lifestyles was not on their minds; these things were simply tangential benefits of the project.

    Conclusion

Why did all these societies, projects, and campaigns emerge at the same time in Bristol during the late sixties and early seventies? Threats to the environment and the earth's ecosystems were not new: deforestation, species extinctions, poor air quality, and urban sprawl had existed for decades. What had changed was the character of Bristol itself in the wake of post-war redevelopment and the rise of the car. These changes highlighted to many people the importance of the natural world to their lives, but not in the same way for every person, hence the variety of societies created. For some it underscored its importance in town planning and architecture, for others in their lifestyles, and for others still in the wider picture of global ecosystems and environmental forces. The natural world was and is many things to many people, and these societies, with their divergent philosophies around the relationship between human and environment, demonstrated that. The common strand that ran between them all at this time, however, and the spark that ignited the explosion in the creation of these sorts of groups, was a fear that the communities and identity of Bristol were being damaged, and a belief that nature could act as a force to heal them.

    Author/Publisher: Louis Lorenzo

    Date of Publication: 28th of May 2020

  • The Child Botanical: A Case For Exploring The Intersection Between Environmental History and The History of Childhood

Childhood in Britain today, as a concept, is extremely precious to society. Children are pure, innocent, as yet uncorrupted by the world; theirs are 'the hands by which we take hold of heaven'. They represent the future, both literally and conceptually, in that they are symbols of potential change, an opportunity to imagine how the world could be different. If this is true, then what is adulthood? It is the antithesis: a corruption of innocence, a loss of purity, and a symbol of the status quo. Our idea of nature, of what is "natural", is significant to such concepts. We see the adult world as constrained, urban, and interior, whereas the child's, ideally, is unconstrained, rural, and exterior. The outdoors (forests, parks, beaches, and riverbanks) are the "natural habitats" of youth, where children exist at their best; not the home, car, factory, or office. Children are lent a purity by association with these natural spaces. Simultaneously, they lend their own purity to them. Our conceptualisations of children and of nature tie them inextricably to one another.

This concept of the "child botanical", one that does not necessarily square with the reality of the relationship between childhood and nature, took hold in Britain with the advent of industrialisation. Its origins lie in Jean-Jacques Rousseau's On Education (1762), which marked a departure from the puritan belief in original sin towards its opposite, that we are born virtuous. In the 19th century romanticists such as William Wordsworth mainstreamed the idea and 'the cult of childhood' was born, leading to a boom in children's literature from The Water Babies (1863) to Swallows and Amazons (1930). Such works and ideas nearly always tied children and the natural world together, making them joint symbols of an idealised natural purity being lost to the modern world of factories and smog. Today that sense of loss is still prevalent, felt at both societal and often deeply personal levels, and it now has a name: "cultural severance".

In post-war Britain this relationship gained a new dynamic with a sharp rise in population, the car, and the resultant urbanisation. This period holds the most contemporary relevance because it is that of the childhoods of today's adult population, who have seen (and overseen) during their lifetimes a transformation in the way children interact with their environment. Many equate the degradation of the natural world they have witnessed over time to a deprivation of childhood. But are they right? Is there such a thing as a "special relationship" between nature and child? Why has the concept proved so appealing? Why is it that children have found themselves at the centre of the debate around the present climate crisis? Why does the 16-year-old climate activist Greta Thunberg draw so much admiration, but also so much hate? As the effects of climate change encroach ever further into our daily lives, issues around the relationship between childhood and the environment are only being brought into greater prominence. Climate change is being cast not only as a physical attack on children, in the form of pollution, but also as a conceptual one: an attack on the "child botanical". Greater study of that relationship, and how it has changed over past decades, is therefore both important and timely.

However, children as physical beings, as opposed to concepts, are prone to act antithetically to the ideals they are held to. What children decide to value within their environment is ultimately up to them, and their choices, set against the expectations placed upon them, can prove disruptive. Sometimes a rolling pastoral landscape is boring whilst a busy industrial site is exciting; sometimes a dead animal is more intriguing than a living one. To many children nature isn't something to protect; it is to be used, to be played with. Likewise, the natural world doesn't always show respect for the purity of the child; indeed, children are more prone to its dangers. The point here is that the relationships between children and environments are necessarily their own. They are unique and fundamentally different from those of their elders, transgressive even, and yet in history they are paid little attention. Despite the importance we place on "childhood" and "nature", history seems to indicate we value these more as concepts than as realities. Such relationships can no longer be deemed ahistorical, for they offer what historians constantly seek: a new perspective.

Some works in recent years have begun to tackle such issues and the "environmental history of childhood" more generally. All focus so far has been on America, where the concept was first toyed with in Elliott West's Growing Up with the Country (1989). It is clear West thought this an area criminally underexplored in history, describing it as being 'at best embryonic' despite the subject matter being 'of some of society's most important, interesting, and perceptive members'. It is important to note that West did not set out to make his history of childhood environmental; this was simply the logical direction in which the study led him, a clear indicator of the natural fit between these two fields. West found that the key difference in how children and adults related to the frontier lay in their relationship to the environment. For children the flora, fauna, weather, and land were not symbols of a "frontier" at all; they were home, the only world they knew. Their relationship with that environment was therefore fundamentally different from their parents'; they held a unique 'kinship'.

Since 1989, however, whilst the respective fields of the history of childhood and environmental history have both grown, they have had little interaction. To date there has been only one book that has explicitly sought to write an environmental history of childhood: Pamela Riney-Kehrberg's The Nature of Childhood (2014). Again, this is an American work, focussed on the mid-west from the 19th century to the present. Therein, Riney-Kehrberg does an excellent job of explaining the methods by which American society has sought to control the relationship between child and environment, and charts the increasing restriction of children's spaces over time, particularly in urban environments. Today, she argues, only the indoors is considered a "safe space" for children. The work overall lacks a degree of nuance, however. The past is always framed as a ubiquitous gold standard from which things have only ever deteriorated, specifically since Riney-Kehrberg herself was a child in the 1970s. As a result, the work at times feels as if it were written by a stereotypically miserly elder complaining about "kids these days", bemoaning that children don't play outside anymore because they 'spend their leisure time at soccer matches, watching television, or looking at their computers, cell phones and video games'. Whilst the use of a declensionist narrative is understandable to an extent, the issue is not so clear-cut as to pronounce the current generation utterly deprived of environmental understanding. Could it not be equally argued that children today have a far greater awareness of the environment as a whole than those of generations prior?

Issues of control are thus also at the heart of this discussion. Because the environment and children are both seen as malleable, easily influenced, almost helpless, there is a dialogue to be had about how society decides to try and control that relationship. Do we seek to regulate it because we see too little of the innocence that is supposed to exist there? We think of children as pure but also as naïve and unappreciative of the status they hold. They must be taught not to trample on the daisies or build dens from silver birch, to appreciate nature correctly. If they interact with nature in the "wrong way" then they must have been misguided, so we must provide the proper guidance to make sure they do it the right way. Similarly, we see nature as simultaneously virtuous and dangerous. Wild spaces, be they rural or urban, are borderlands on the fringes of society where children can often exercise greater degrees of control and independence. The rules are less clear, and the physical space is unorderly and anarchic. At the same time, these spaces can be where adults are strictest in their policing, designating pathways not to be strayed from, putting up signs telling you to "keep out for your own safety!", or preventing entry altogether. Thus we have the paradoxical concept of wanting to protect the environment and children from one another whilst also wanting them to exist together as much as possible. How did such ideas come about in Britain? What is encouraging (or discouraging) people to regulate the child botanical?

The methods we use to this end are various: schooling, scouting, and stories of all sorts. There are a great many organisations dedicated to "introducing" children to the natural world, and the focus of children's media on such themes is intense. Through charming anthropomorphisms, tales of adventure in exciting wildernesses, and escapes from the dreary adult world, we are desperate to instil a love of nature in our youth. This is another of the ideological complexities in how we understand children and nature: that we as adults know better how to treat the environment despite children supposedly being closer to it than ourselves. Humanity tends to cast itself as a warden figure, a guardian over the "defenceless" children and environment; is the high value placed upon them due to how much we value them as independent actors, or as possessions? Examining how people have sought to influence the relationship between child and environment, and how and why that influence has changed over time, can contextualise the relationship we know today and offer perspective on how it might change.

Furthermore, the idea of "children" as a cohesive category is a problematic one. Differences in class, race, and gender play just as much of a role in the lives of children as they do in those of adults, and yet the ideal of the child botanical has its roots in the work of white, middle-to-upper-class men. Similarly, all the current research is based on American childhoods; what differences might we find in Britain? The environment is an extremely variable factor too, from urban to rural, north to south, highland to lowland. Might it not be that children who are products of different social and societal influences will approach nature in different ways? Is our ideal of childhood the right one? The rebelliousness of children can be enlightening in these respects, their unfamiliarity with the established order making them more likely to question or transgress it, undermining what adult society says certain types of people are a "natural" fit for. Ultimately, our current understanding of how children independently think about and interact with nature is limited.

Examining how the relationship between the environment and childhood has changed over time is thus an insightful and important enterprise. This is an area of study that is bitingly relevant to the present day, surprisingly underexplored, and one that delves to the heart of contemporary issues around urbanism, social control, class, generational divides, safety, and, of course, the environment. The value society places on children means that demonstrating how environmental issues affect them can lead to greater value being placed on environmental issues. Highlighting children's points of view as an alternative to mainstream society also asks us as adults to re-examine what we value within the environment and childhood, and to re-assess how we present these things to each other. It asks us to incorporate children into our thinking about the spaces in which we live from their own perspective, not as we would wish them to be. It asks us to consider which "version" of childhood we are seeking to promote in society.

    Author/Publisher: Louis Lorenzo

    Date of Publication: 7th of January 2020

  • Dead River: An Environmental History of the Tyne Improvement Commission 1850-1968CE

    For men may come and men may go,

    But I go on forever.

    – Alfred Tennyson, ‘The Brook’.

    Introduction

    The Significance of River History

When searching for a location to build a home, the humans who founded the first settlements on the Tyne had several priorities in mind. First and foremost they needed access to a sustainable source of food and water, but they were also looking for a site that was defensible, sanitary, and well connected, to facilitate fast travel and trade with the outside world; the river was the only logical place that satisfied these objectives. Soon many human settlements had clustered around the Tyne's banks, and over time the people of Tyneside built their houses, economies, and cultures around the river upon which they relied for survival and expansion. As time progressed this bond grew tighter as they discovered that the waters could be put to other purposes: generating power; producing chemicals, concrete, oil, and many other industrial materials; and serving as a source of recreation. In this way they followed the global human trend of using the river as a basis for civilisation. Likewise, the Tyne itself, alongside all the other life it supported, found its fate acutely entwined with human developments.

As historical agents, rivers and the life they support have never acted as passive resources merely to be consumed; time and again they have proved to human populations that they can knock civilisations down as easily as they built them up. In c.5000BCE, the fortunes of the peoples of Mesopotamia were dashed against the banks of the Tigris-Euphrates after repeated flooding, partially blamed on their own attempts to direct the path of the river. In c.2000BCE a 200-year drought hit the Indus river and spelled equal disaster for the peoples of the Indus Valley civilisation. In c.350BCE, it is theorised, the Guadalquivir river delta rose up to completely submerge the wealthy city of Tartessos built upon it, thus creating the origin of the Atlantean myth. In contemporary times rivers have become more entwined with human societies than ever, relied on as sources of food, water, culture, trade, and recreation; however, they are also facing some of their greatest threats from the same source. The quantity of pollutants being discharged into waterways such as the Nile, the Ganges, and the Yangtze, alongside the effects of landscaping and unsustainable water use, is resulting in ecosystem collapse. Ever-increasing quantities of resources are consequently being spent in efforts to save these unique environments, both for their own sake and for humanity at large. The story of the relationship between human and river is an ancient one, and in the modern day it is as important as it has ever been.

Across history the story of the River Tyne is one that parallels that of the modern Nile or Ganges, and it may hold some lessons for them. The pertinent period to assess began in 1850, when a body named, in retrospect perhaps ironically, the “Tyne Improvement Commission” was appointed by parliament to increase the volume and profitability of trade on the river. This organisation’s conservatorship of the river would last until 1968, and whilst not solely accountable, it was predominantly responsible for the transformation of the river during this time from a natural estuary into, in its own words, ‘a great highway of industry’. Environmentally speaking, its century of “improvements” meant that in 1957, when the Tyne’s waters literally bubbled with noxious chemicals, the river was officially classified as ‘biologically dead’.

    Methodology and Historiography

The focus of this article is therefore upon the Tyne Improvement Commission (TIC) and the unprecedented changes that it oversaw during its 118 years of authority. The primary route of analysis is through the extensive records that the organisation kept of its proceedings, documenting step by step how it went about its program of transformation. From an environmental perspective, these sources are used to assess the impact of the TIC’s works upon the river and its ecology, and how those impacts then affected humanity in turn. The commission’s discussions are also analysed to come to an understanding of the philosophy behind its actions. Ultimately this is a study of the relationships, both physical and intellectual, between humanity and the rest of the natural world as they developed during the TIC’s tenure, and the ways in which they intersected with one another. A river is a complex, interconnected ecosystem where disturbances on the waters can ripple outward beyond foresight; it therefore only makes sense to assess it as such.

What this article also provides is a counternarrative to the traditional histories of the region. Much of Tyneside’s modern identity has been built on its industrial heritage, of which it is proud, and a significant majority of its written history has forwarded a narrative in which the river’s “golden years” are the same ones that resulted in the pollution and destruction of much of its natural resources. The modern Port of Tyne describes the appointment of the TIC as the beginning of ‘the heyday of the river’, but from the perspective of the river itself this is far from the case. It would be untrue to say that environmental concerns have been omitted entirely from the historiography, although this is sometimes the case, but it would be fair to state that they have been largely disregarded. The history of the Tyne’s shipyards, mines, and factories is far from something to be ashamed of, but it is, this article argues, something to be reconsidered, and held in duality with an understanding that the benefits of industry came at a significant cost.

The predominant concentration of previous histories of the river has been on the human activities that took place upon it: the history of export statistics, employment rates, and commerce. In 1880 James Guthrie’s The River Tyne: Its History and Resources gave little attention to ecological concerns, focussing instead on the feats of engineering that had been so successful in remodelling the river’s form in the decades before he wrote. Throughout the 20th century this trend continued with texts that also focussed predominantly on human achievement and engineering, such as Life on the Tyne, The Origins of Newcastle upon Tyne, and Maritime Heritage: Newcastle and the River Tyne. The same is true of the texts of the 21st century, such as The Story of the Tyne and River Tyne. All of these are fine publications which competently examine many aspects of Tyneside’s history, and indeed all were useful in the writing of this article, but it must also be said that they neglect environmental angles. Not all histories can or should be environmental histories, but the extent to which the natural history of the river has been buried beneath fascination at industrial achievement, even to this day, is surprising.

One text, however, has acted as an exception to this rule and taken an ecological approach to the river’s history: Leona Skelton’s Tyne after Tyne: An Environmental History of a River’s Battle for Protection 1529-2015. Tyne after Tyne looks at the history of human environmental action and conservation on the river, and therein Skelton analyses how approaches and attitudes to the Tyne have varied over time regarding the conservation of its natural resources. This study has opened the field of environmental history on the Tyne and has revealed a forgotten and often ignored, yet fundamentally integral, facet of its past. Where Tyne after Tyne covers a broad time period, however, this article directs its attention more tightly toward one specific stage of the Tyne’s environmental history, exploring it in greater detail and looking at the physical effects of that environmental action upon the biosphere.

The importance of the relationship between human and river is one that has always been appreciated on the Tyne, but the importance of an environmentally sustainable relationship is one that is now having to be re-remembered. Indeed, for the majority of its history before the formation of the TIC, the citizens of Tyneside managed to live in comparative harmony with their river, and not because they lacked the technology to do it harm, as the example of the Mesopotamians proves. It is crucial therefore that we understand the pre-industrial history of the river, as both comparison and context within which to assess the momentous changes it would face after 1850, changes which so fundamentally shifted the ecological landscape.

    Chapter 1

More Lasting Value than Californian Gold

The Deep History of the Tyne

The history of the River Tyne began at the same time the British Isles rose from the sea 30 million years ago. Just north of Kielder at the Scottish border the North Tyne emerged and meandered eastwards before travelling south towards Hexham. The South Tyne began in Cumbria, flowing over the limestone rocks of Cross Fell and feeding into the north river at Warden Rock. At this meeting point the two then proceeded eastwards, sculpting a valley out of the chalk which had formed 40 to 80 million years before. The formations made during these early chapters in the Tyne’s history have proved influential on its development thereafter, the movement of glaciers and other fluvial processes being the key instruments which created the landscapes and habitats that have dictated the character of the valley ever since. The result of these processes was that the Tyne region became naturally isolated from other parts of the country, establishing an environment that was ecologically unique for the plants and animals that occupied its banks. After humans arrived, this isolation drew people closer to the river, as it created a greater need for water-borne trade.

These ancient geological processes created the environments on which all life in the region has since been based, the Tyne’s mudflats, riverbanks, and tributaries encouraging the specific types of flora to grow and fauna to breed that have since become local to the region. Atlantic salmon, river otters, and water voles, alongside rarer creatures like the kittiwake, the white-clawed crayfish, and the freshwater pearl mussel, all chose the Tyne for these characteristics, as did humans. Outside of wildlife, the prevalence of lead and coal on the banks of the Tyne has been extremely influential on its history ever since mining started in the 2nd century, and the abundance of gravel on its riverbed became a valuable resource in the creation of concrete in the 20th.

    Preservation for Profit: The Corporation of Newcastle

The Corporation of Newcastle could be described as the progenitor of the TIC, although the two organisations took considerably different approaches to managing the river. It rose to prominence in 1319 when it was granted royal conservatorship of the Tyne between Sparrow-Hawk and Hedwin streams at the expense of rivals south of the river (in this context “conservatorship” meaning the preservation of commerce, not ecology). Soon after, it acquired exclusive royal licenses to dig coal in 1330, and by 1530 it had been made illegal to load or unload goods anywhere along the river except from the city of Newcastle. Through taxes, trade, and tolls the corporation absorbed the majority of the Tyne’s profits and became efficient in preventing other townships from tapping its wealth. Alongside hundreds of minor blockages, it brought major successful petitions against South Shields, Jarrow, and the Bishop of Durham to prevent them loading ships, building wharfs, and exacting tolls. In this way the Newcastle Corporation acted as an unlikely force for ecological preservation, preventing redevelopment of the river as a means of blocking rivals’ opportunities to turn a profit.

The nature of the corporation’s trade, being predominantly in hides alongside wool, fish, and corn (although coal was profitable and growing), also acted as a force for environmental conservation. These industries, being based on natural products, were considerably more reliant on the health of the river than those, such as coal, which would dominate the Tyne in later centuries, and this gave the Newcastle corporation a financial incentive to care for it. Local flora and fauna were also what the population of Tyneside predominantly survived upon, in terms of sustenance as well as economics. Additionally, even if it did not fully understand the science behind the impacts of dumping in the river, the corporation was still very aware that its relationship with the Tyne was a ‘two-way process’ and that their fortunes were bound together; knowing the river’s tides and currents, and knowing where it was shallow or deep, or the best spots for fishing, was integral knowledge for the corporation’s success. It knew that the status quo was profitable, and was therefore wary of change.

The way the corporation managed this was through a “river court”, which it set up in 1613, soon followed by a conservancy commission in 1614. The river court, complete with river jurors and a water bailiff, was held weekly and was used to impose fines on those who would ‘do harm’ to the water. This was meant in an economic sense, but it is clear that environmental and economic prosperity were inseparable in these cases, so closely were they tied together. This approach was very effective, and Newcastle-upon-Tyne grew wealthy as a result of it. William Brereton, after a visit to Newcastle in 1635, remarked that it had become ‘the fairest and richest town in England’.

    Figure 1. A reconstruction drawing of 16th century Newcastle-upon-Tyne.

    Ballast Dumping in the 18th Century

However, the impression must not be given that the river lay completely unsullied before the advent of the TIC. The Newcastle Corporation was not an environmentalist organisation, and its river court was not created out of a desire to protect the natural world for its own sake. This is best shown by looking at the 18th century, in which the extent of trade on the river began to increase substantially. For many years beforehand ships had been dumping ballast into the river with less than stringent regulation; the entirety of Newcastle-Gateshead’s quayside had been created by a slow process of filling in old docks with silt, eventually forming a platform of land. By the 1700s, however, the extent of these depositions was causing the already narrow and shallow waters to grow narrower and shallower; indeed, at low tide you could wade across the river at the point where the present swing bridge stands. More importantly for traders, however, the tides were pulling the ballast downstream towards the mouth of the river, where it was feared the ports would become clogged to such an extent that commerce might be halted altogether.

The health of the ecosystems within the river was also being affected, as the sediments were burying habitats and thus reducing aquatic diversity. Ultimately, however, the environmental impact was not highly significant, because it did not fundamentally change the environs of the river as the dredging of the same material would do in later centuries. The Tyne was already a shallows environment, and the ballast, whilst causing some damage, did not alter this and was composed of non-toxic natural materials such as sand, mud, and rock. The ecological records available from the time, concerning concentrations of fish in the river (important to the fisheries of Tyneside), endorse this point, suggesting that the river was not only as healthy as it had ever been but was, in fact, healthier. It was recorded that on a single day, the 12th of June 1755, more than 2,400 salmon alone were caught from the river. In comparison, over the entire month of June in 1996, only 338 salmon were recorded in the Tyne. The 1755 numbers may have been even higher without the dumping of ballast, but levels of local life were evidently not notably adversely affected.

In principle the corporation had always been against unlicensed ballast dumping in the river; this was partly the reason behind setting up the river court. In this case, however, it did not strongly push back against the process. Predominantly this was because these build-ups of sediment were creating new land along the riverbank, valuable land which, under the law, automatically belonged to the corporation. Indeed, even after it was allotted government money to clear the silt in 1765, the corporation took the opportunity to turn a profit at the environment’s expense. Given the catastrophic impact that dredging would have on the river under the TIC, however, it could equally be argued that the corporation unwittingly took the more environmentally conscious approach in this decision. Either way, complaints from navigators on the state of the Tyne continued well into the 19th century.

    The Enlightenment and Proto-Industrialisation

Whilst humans’ physical relationship with the Tyne may not have changed substantially in the 18th century, what did change was their attitudinal relationship; a shift which laid the groundwork for the ideology of the TIC. The Enlightenment was the primary movement behind this change in relations, a philosophy with humanism at its core and a belief that scientific empiricism would lead humanity towards the conquering of the natural world. For the rational, orderly ideals of the Enlightenment, the mercurial, muddy, meandering Tyne was something antithetical, something to be controlled. However, after these philosophies became popular the Newcastle corporation did not immediately set off on a crusade against the Tyne as the TIC later would, for three main reasons. The first was, as previously explained, that the corporation had been made extremely profitable by specifically avoiding tampering with the river’s natural systems. The second was that it lacked the technological ability to landscape a waterway such as the Tyne, or at least the ability to do so in a way that would not be prohibitively expensive. The third was that the organisation’s frameworks and regulations had been set up hundreds of years before the advent of the Enlightenment, and adapting to fit this new ideology would have meant a reinvention of what the corporation had stood for since 1400, a reinvention which never took place. It was also the case that Enlightenment ideals were not fully pervasive, and many people, especially those in nature-based industries such as fishing, were sceptical of attempts to control the river. Even in 1850, on the formation of the TIC and at the height of frustration with the Tyne’s unnavigability, the Shields Gazette wrote an endorsement of the river’s natural state, saying it was ‘of more lasting value than… Californian gold’.

Figure 2. A view of Proto-Industrial Gateshead in 1830.

By the beginning of the 19th century, however, the physical landscape of Tyneside was beginning to match its ideological one, despite the inactivity of the corporation. At Derwentcote, Winlaton, and Lemington were ironworks and two glass manufactories. At Blaydon was a lead refinery, a flint mill, and a large pottery, and at Derwenthaugh was a coke manufactory and coal tar ovens. The first Tyne tunnel was built at Wylam to transport coal under the river. The river was also home to two coal staithes and a number of lead mines, both materials having been mined on Tyneside since the Romans built its first bridge in 122CE. These were the first buildings to begin washing substantively harmful substances into the waters, such as coke breeze, benzene, naphtha, ammonia, and phenol. However, without chemical testing, at this stage these industries were too new and too few for people to properly appreciate the harm they were causing to the river. An 1827 report from the topographer Eneas Mackenzie does approach this topic, noting that levels of salmon in the river may be declining because of the ‘deleterious mixtures that are carried into the stream from the lead-mines and various manufactories on the banks of the river’. It is evident from this statement that Mackenzie was somewhat aware of the environmental damage being done to the river, but it is also evident that he did not consider the decline in river life to be an inherently bad event. The fact is only mentioned off-hand and quickly forgotten in his excitement around the wonders of industry.

In 1816 the corporation commissioned the engineer Sir John Rennie to create a report of suggested changes to the riverfront. Therein he recommended the construction of two piers at Tynemouth, embankments along the river as far up as Newcastle, and multiple quays, all of which would have to be accommodated through a program of extensive dredging and landscaping. His stated goal was to ‘direct the river in a straight, or at least a uniform course’, an idea very much in line with Enlightenment ideals. The corporation, however, still unwilling to instigate change, did not act to implement Rennie’s suggestions, and this added to the growing frustration at the state of navigation on the river. Thus the Tyne Navigation Act was passed in 1850, which resulted in the formation of the TIC. The organisation immediately set about its work of dramatically altering the landscape of Tyneside, and by the end of the century it had implemented all of Rennie’s suggestions and more, creating a deep and orderly channel. This quickly resulted in the decline of Tyneside’s keelmen, whose entire trade had been built on the premise that large ships could not navigate the river’s shallows, but it also resulted in severe declines in plant and animal life, as well as in the overall health of the river.

    Chapter 2

A Great Highway of Industry

Reconceptualising the River

The works of the Tyne Improvement Commission completely transformed the face of the river on a scale that had previously only been achieved through millions of years of geological landscaping; they also began an era that would result in the worst pollution the river has ever seen. The men, and unsurprisingly for this time they were all men, who constituted the commissioners of the TIC were a mix of local councillors and business owners whose trade was located in the riparian zone, the majority of whom were based in the coal and shipbuilding trades. Whilst the commission was officially unbiased, the wealthier and more powerful members were often able to exert their influence for their own ends. In the proceedings for 1875, for example, we can see how Lord William Armstrong was able to rush through expansion plans for his factory at Elswick without the usual scrutiny period of one month.

Together, however, the commissioners were united in a common goal: to make the Tyne as profitable as possible. This was the very purpose for which the TIC had been set up, and its members ‘deeply’ believed in that task, with no thought towards environmental affairs unless they were to infringe on profits. Indeed, across more than 100 years of proceedings papers the TIC demonstrates no discernible changes in attitude towards the river or its own purpose upon it; the proceedings of 1894, 1902, and 1945 all specifically state that its prerogative as “conservator” of the river was not to look after its natural state, only to keep it in a condition suitable for facilitating trade. One proceeding from 1958, as the commission was reaching the end of its lifetime and as environmental concerns towards the river were growing in popularity, best demonstrates this intransigence. When the commissioner who represented South Shields, Mr. Gompertz, inquired as to the ‘risks we are running in further pollution of the river’, in relation to allowing sewage to be discharged directly into the water, the chairman, after some debate, responded that they had ‘no powers on that matter at all’. This statement is astounding given that the TIC was the very body responsible for the approval and regulation of sewage systems at this time. Evidently, it did not feel that environmental concerns constituted a legitimate reason for regulation in 1958, just as it hadn’t in 1850. It reacted similarly when requested in 1881 to help with the building of recreational facilities such as a rowing and sailing club, denying that this was its responsibility.

This consistency of approach and unified direction of purpose is one of the astounding facets of the TIC, and perhaps one of the reasons behind its success in so categorically remodelling the river. This was not an organisation that passively and indifferently carried out its task; it actively pursued a vision and cared deeply about its planned “improvements”. The Tyne needed to be competitive in a global context, with infrastructure capable of matching other industrialising rivers such as the Thames, Clyde, and Rhine, which could also be called inspirations for the TIC. In 1876, long before most of their works were close to completion, the commissioners had already proclaimed that the Tyne was ‘the finest port in England… and the world’, listing its safety, capacity, and possibility as reasons for this. This attitude was completely maintained 75 years later in a document the organisation published in 1951 entitled A Century of Progress. In a manner that could almost be viewed as fanatical they write that ‘commerce is our life blood’; this was a capitalistic institution in its purest sense. In this manner the TIC bears some resemblance to the former Corporation of Newcastle, both being organisations that were granted conservatorship of the river and both primarily being concerned with its economics. However, where the Newcastle corporation stood to profit from preservation of the river’s natural state, the TIC’s business model meant that it was indifferent to such concerns and paid them little attention. Indeed, as it stated in 1908, it would use ‘all that science and nature can offer’ to achieve its ends.

    Figure 3. Coat of Arms of the TIC above the doorway into Bewick House, Newcastle.

Dredging the River

In order for any of the infrastructural projects the TIC would undertake during its tenure to be worthwhile, such as the construction of docks, piers, and bridges, it first had to ensure that ships would be able to pass far enough up the river to access them. The solution was to dredge the river, removing the sand, rock, and mud that lay on the riverbed and dumping them out to sea, a monumental task that had only recently been made feasible by the invention of the steam-powered bucket dredger, the first of which had been employed in the neighbouring harbour at Sunderland. In 1850 the commission had access to only one steam dredger, which it had brought down from the river Tweed, but in 1853 it bought a second, and by 1920 it had six, all working to deepen and widen the river. These dredgers were tasked with ‘working day and night’, and so the citizens of Tyneside were forced to become used to their metallic clanks and churning coal-fired engines.

The commission’s reports comprehensively documented these dredgers’ activities, as the organisation was very interested in maximising their efficiency, but they did not monitor the environmental repercussions that came with such work. The tests the commission did carry out, the first of which was in 1895, were concerned with the dumping of solid waste into, rather than the dredging of it from, the river. This was not because of environmental concerns, however, but because the commission saw that such dumping would make for an inefficient dredging process, the material being removed only to be replaced again overnight. The same reports make no attempt to measure or regulate the chemical composition of the water, only the solid material. Even in the TIC’s earlier years it cannot be argued that this was because of a lack of scientific understanding, as the Tyne Salmon Conservancy (TSC, the body which represented the Tyne’s fisheries) carried out its own rudimentary chemical tests as early as 1866, being understandably concerned about the unhealthy state of the river.

As the TIC was not concerned with such issues, however, the dredgers continued their work, and over a period of 70 years they deepened the Tyne from its previous depth of 1.83 metres to 9.14 metres. The commission’s own estimation of the extent of matter removed from the riverbed during this time amounts to the staggering figure of 149 million tons. This, alongside the TIC’s other infrastructural projects, had a huge impact on the capacity of Tyneside to trade on the river, with the Tyne becoming the largest repair port in the world by 1880, and the largest for the exportation of coal, shipping 8,131,419 tons that year. Trade on the river doubled in total; the Tyne carried one ninth of the total tonnage of the United Kingdom, second only to the Mersey, and built more ships than any other river aside from the Clyde. The cost of this was substantial to the TIC, over £3.5 million (equivalent to £206 million in today’s currency), which it managed to source from government grants, fundraising from local businesses, and its own taxation schemes. However, the cost was substantially higher for the flora and fauna of the river, which soon found their habitats ripped from beneath them, a destruction which it is estimated will take hundreds of years to recover from.

    Figure 4. “King’s Meadow” island being dredged from the Tyne 1885CE.

The first and most evident effect of this dredging was the replacement of the river’s natural shallow environments with much deeper, faster-running waters. The problem with this was that much of the local plant life, including reeds, lilies, pondweed, and willows, was not suited to surviving in such a habitat and so soon began to disappear from the riverbank, as did smaller organisms like phytoplankton, algae, and zooplankton. The collapse of the wider ecosystem then followed as a domino effect, as each creature in the food chain found its food sources diminished. Much of the plant life had also acted as a habitat and spawning ground for many shallow-water life forms such as crabs, worms, shrimps, and fry, as well as for insects like dragonflies and water boatmen and other small invertebrates like the caddis and the mayfly. This in turn meant a decline in the predators that ate such creatures, such as the mackerel, flounder, and seal, alongside birds like the heron, tern, and kingfisher. In areas of significant dredging, the result was the complete removal of a shallow-water habitat and the creation of a deep-water one, which significantly destabilised the local biosphere.

If dredging had resulted in this alone there would have been a possibility of ecological recovery within a relatively short time frame, as whilst much of the local flora was forced out, some of the hardier specimens could have survived in the new environment: plants such as the sedge, plantain, starwort, and sharp rush. Other plants better suited to the new environment would also have moved in, such as cordgrass and seagrass. However, the impact of the steam dredgers went further than habitat destruction. All dredging by necessity causes disturbance to the riverbed, but this early form of bucket dredging was particularly dangerous in this regard. The scoops dug far enough down into the riverbed to reach the benthic zone, the sediment sub-surface at the lowest level of a body of water, and if they didn’t, the explosives which were also used as part of the dredging process certainly did. The reason this was dangerous was that the benthic zone of the Tyne contained many toxic chemicals such as lead, biphenyl, and tributyltin, which, unlike with modern dredgers, were not protected from being released into the water, and as such they acted as biocides which weakened or killed plant and animal life in the river. This is especially significant considering that this period of disturbance lasted as long as 70 years, the prolonged deviations from natural water turbidity also affecting the metabolism and spawning of certain creatures such as trout and the seeding of vegetation like seagrass.

Whilst unenlightened as to the chemical specifics, the TIC could still observe the very clear fall in biodiversity on the Tyne and was to some extent aware that this was a result of its own work. Indeed, when writing a promotional piece for its port in 1925 the TIC not only acknowledged this but was very much proud of the achievement, and not wholly unjustly, given the monumental feats of engineering it required. Therein the commission wrote that whilst the old river may have been ‘picturesque’ it was now ‘the Tyne of yesteryear’, thus positioning the TIC’s modern creation specifically as the antithesis of the ‘picturesque’. Overall the commission’s language in this document is indicative of its view of its own accomplishments: that it had made the post-1850 Tyne into something that was, conceptually, a completely different body of water from the pre-1850 article. In the proceedings of 1875 the commissioners wrote that their goal was to make the Tyne ‘equal to a dock’, to remove its status as a river entirely; by 1925 they believed they had achieved this. It would not be entirely incorrect to agree with the TIC on this point, as the ultimate result of its dredging program was the effective conversion of the Tyne from a river into a very large canal from Dunston downwards, as it remains today. What previously had been merely a river was now, in the commission’s own words, a ‘great highway of industry’. ‘Highway’ is the notable word to examine here, as it distinctly encapsulates the perspective which resulted in the river’s conversion: a view of the river as simply a road made of water, a transportation device. The commission valued the river just as much as the Newcastle corporation or the TSC did, perhaps even more so, but the nature of its occupation meant it no longer valued the river as an environmental resource, only as a logistical one.

However, the relationship between the TIC and the Tyne was not so unidimensional, and the commission soon discovered that its program of dredging would, to some extent, need to bend to fit the Tyne’s will. Erosion was its primary difficulty, as the TIC soon found swathes of riparian land collapsing into the river, much of it its own, although it was reluctant to admit in its proceedings that this was a problem of its own making. The problem persisted throughout the TIC’s administration, and it had to deal with the erosion caused by dredging into the 1950s and 1960s, long after it had stopped deepening the riverbed, in places as far up the river as Haydon Bridge on the South Tyne. This was occurring because the dredging process had increased the gradient of the river, heightening its power on its new, steeper course and thus causing more aggressive deterioration of the banks. The effect was intensified by the straightening of the river, which meant much greater force was exerted against the riverside, and this consequently created the need for weirs and embankments to be built along much of the quayside, although much of the time the commission was forced to concede and allow the river to carve at the land as far in as it required.

Further to this, the dredging resulted in the tides exerting far stronger forces on the river, making the waters more turbulent and unsafe to travel on, as well as causing further erosion and silting up the docks, ironically creating much more work for the dredgers. In 1881 tenants in North Shields complained to the TIC that the force of the river ‘shakes the building’, and the northern rowing club complained the following year that its casting-off point had been made ‘excessively deep and dangerous’ because of this. Such problems also affected large structures such as the Scotswood bridge, whose company wrote to the TIC in 1884 complaining that the bridge’s foundations were being undermined and it was at risk of collapse. The use of unpredictable explosives for the purposes of dredging was even more dangerous, as proved when, in 1894, the dredgers came within ‘90-100 feet’ of breaking through the roof of a mine owned by the Montagu colliery that ran under the river.

    Figure 5. Scotswood Suspension Bridge in 1832, Tyneside’s first industrial era bridge.

The sheer scale of the dredging operation was also causing difficulties, as the TIC found it increasingly difficult to regulate the process. Whilst there were only a handful of steam dredgers, there was a considerably larger number of ships known as “hoppers”, barges which were responsible for taking the silt from the dredging ships and dumping it into the North Sea. Two problems arose from this process. The first was that a number of hoppers were dumping their cargo too close to the shore, with the result that it was washed back into the river again, a problem exacerbated by the building of the piers at North and South Shields because they increased the area the TIC was obligated to manage. To combat this the TIC passed a bylaw in 1885 requiring that the silt be deposited outside a three-mile radius, but it still found that the rule was not always obeyed. Indeed, it was in the hopper operators’ financial interests to create more work for themselves.

The second problem arose slightly later, in 1900, after the program had been running for some time: the sheer amount of material being dumped into the sea was beginning to threaten the passage of ships into the port because it was ‘raising the bed of the sea’. This also created problems for anglers both on the sea and on the river, who found their nets and pots periodically smothered with ‘masses of refuse’, and who often submitted complaints to the commission. With all of its technological might the TIC clearly thought of itself as an organisation that was above nature, that could do with the Tyne as it pleased, but the river proved time and again to be a force that could not be easily constrained, and occasionally it would remind the commission that it was still bound to play on the river’s terms.

    Reconstructing the River

The dredging of the Tyne was, however, only the preliminary stage in the TIC’s plan. The removal of 149 million tons of material from the riverbed was simply the groundwork required to allow larger ships to utilise the commission’s key infrastructure projects, namely the piers at North and South Shields, the Albert-Edward, Northumberland, and Tyne docks, and the Tyne Commission Quay. Considerable resources were also spent on projects supporting these large constructions, such as the building of embankments, the swing bridge, and the destruction of rocky outcroppings.

The construction of the Northumberland Dock in 1857 and the Tyne Dock in 1859 were the first major schemes to be completed under the TIC’s oversight, and work soon followed on another, named “Albert-Edward”, in 1884. Significant excavations and further dredging were required for these projects, including the removal of tens of thousands of cubic metres of mud, gravel, clay, and sand; fortunately for the commission, it received a good amount of private assistance in the removal of much of this material from firms such as the Wallsend cement company in 1877. This was especially true in regard to the gravel and clay, some of which was then used to create the base on which the docks stood. The Tyne Commission Quay, opened later in 1928, was built in the same manner, and also with a small hydroelectric power station, an example of how the current quickened by the dredging was utilised by the TIC to its advantage. This dredging caused all the same environmental problems as it did in the rest of the river, but with the additional issue that the space was then entirely filled in with solid concrete, the Albert-Edward dock alone taking over 32,000 concrete blocks to construct, meaning the riverside and seaside ecosystems had no possibility of recovery.

The single project which caused the TIC the most strife was the construction of the piers at North and South Shields. Partly because dredging had caused dangerous tides to progress up the river, work was forced to begin ahead of plan in 1854 in order to protect ships in harbour, but the piers would not be completed until 1895, and at a much higher cost than the commission intended, £1,000,094, due to repeated damage from the force of the currents around the mouth of the river. Even after completion the saga was not over: the north pier lasted only two years before being destroyed in a storm in 1897, and was only rebuilt by 1910, bringing the total cost of the project up to £1,544,000. During all of this difficulty, in 1878, the TIC decided not to remove the “Black Middens” situated in front of the north pier because of their function as natural breakers which protected the coastline and, importantly, the walls of the pier, stating they were to a ‘general advantage’. These middens were infamous dangers to vessels, wrecking five ships in three days during a storm in 1864, but they were nevertheless so useful to the TIC that they were preserved. This case demonstrates that the TIC did not see itself as being on a mission against the natural world in all contexts. If a feature did not hinder, or even helped with, its work, as with the Black Middens, it was happy to leave it alone. By the same token, however, it would not pause a second for anything which obstructed it, no matter its beauty or significance.

    Figure 6. North Shields pier collapsing into the sea, 1897.

Generally, however, the TIC was quite happy to remove natural rock formations along the river if they got in the way of its “improvements”. As evidence that not everybody agreed with the prerogative of the TIC, two petitions were raised by the general public, the first in 1881 and the second in 1882, and filed against the commission in attempts to save two popular natural beauty spots, Frenchman’s Bay and Lady’s Bay. Neither was successful, despite being signed by a great number of noteworthy people, including the mayor of Newcastle, the naturalist John Hancock, and a number of scientists with interest in the areas. The TIC therefore carried on, removing a number of ‘protrusions’ at Felling Point, Whitehill Point, and Bill Point (amongst others) during the 1880s and significantly widening the river in one area near the mouth to create the Tyne main turning circle; both of these projects required the determined use of explosives over decades to complete. This is an example of the power of the TIC itself, but also of the belief in the importance of its work, both from the organisation itself and from outside. The industrialisation of the river was an imperative; this was “progress”, unarguable and inevitable.

    Chapter 3

Neither Salmon nor Children

    Industrialising the River

The TIC’s tentpole projects, such as the docks, piers, and dredging program, whilst being the single most impactful enterprises on the river’s environment, were ultimately a drop in the water compared to the wider industrialisation that was taking place along the Tyne. The TIC was responsible for approving and regulating all new industries set up in the riparian zone, a task including the regulation of waste discharged into the river, but for the most part it took a laissez-faire approach to this duty. Its purpose as an organisation was to help, not to hinder, the growth of industry, and as environmental regulation would have hindered, it left the matter alone. Instead it acted as a facilitator for the mass production of coal, coke, oil, timber, pottery, concrete, meat, iron, steel, glass, a vast array of chemicals, and all of their waste products. All these industries were built around the river (along with a multitude of smaller businesses), and they used it to help produce their goods, to transport them when complete, and to discharge their wastes into. Alongside these were the sewers of Tyneside, also approved and regulated by the TIC, the number of which grew consistently across the TIC’s tenure until there were 270 active sewers draining into the Tyne when the TIC was replaced by the Port of Tyne Authority in 1968.

After the TIC took control, the Tyne was thus party to a vast array of industrial processes and substances that it had never encountered before, and to a dramatic increase in those which it had. In terms of environmental concern, these substances break down into the primary categories of organic material, organic chemicals, inorganic chemicals, sediments, and hot water. The effects of heat on natural ecosystems can often be overlooked, but the amount of water drawn from the Tyne for coolant and then ejected back into the river still warm was enough to deal considerable environmental damage, this hot water predominantly coming from iron works, steel mills, and refineries such as Crowley’s iron works at Swalwell. The warmer a body of water is, the less oxygen is dissolved in it, whilst at the same time warmer water increases the metabolic rate of the organisms within it, thus increasing their demand for oxygen. A reduction in oxygen in the Tyne therefore meant a reduction in the amount of plant and animal life that could survive there.

The main culprit in the deoxygenation of the Tyne, however, was the discharge of organic material such as sewage and the drainage from slaughterhouses, tanners, and flour mills, such as the Baltic Flour Mills at Gateshead. Once discharged, these organic materials begin to decompose, the decomposition being achieved by a flourishing of aerobic bacteria, which are highly ‘oxygen hungry’ life forms. In the Tyne this occurred on such a large scale that the very first oxygenation test, in 1912, concluded that there was ‘almost no oxygen’ in the river, which alone was nearly enough to end all life within it. Once this had occurred it meant an increase in anaerobic bacteria, which produce foul smells. Other bacteria and viruses harmful to local river life, such as faecal coliforms and E. coli, also bred and spread quickly on this organic material.

Organic and inorganic chemicals, however, were likely far larger killers of river life than bacteria and viruses, and were indeed the killers of bacteria and viruses as well. Salts, acids, mercury, arsenic, benzene, naphtha, cyanide, lead, and phenolic wastes were all being ejected into the river from mines, farms, sewers, oil refineries, and coking plants such as the Derwenthaugh Coke Works, which alone in 1928 pumped 1kg of cyanide into the river for every ton of coke produced. These substances were toxic to almost all river life even at low concentrations, and the concentrations in the Tyne were far from low; together they were the single factor that caused the most damage to the Tyne’s ecosystems and led to it being classed as biologically dead in 1957.

The dumping of sediments into the river, such as wood pulp, coal washings, and sludge, was the one area where the TIC did attempt significant regulation. This was because of its dredging program, for which it did not want to create more work, and so it would inspect the discharges of factories to make sure nothing too solid was being ejected, first hiring Hugh and James Pattinson in 1895 to conduct tests to help with this task. Another substance it attempted to prevent entering the river, because of the damage it caused to the commission’s property, was oil from plants like the Benzol Works, the first place in the world to produce petrol from coal. It is evident therefore that the TIC would only step towards regulation if the environmental interests of the river aligned with its own economic concerns.

    Figure 7. Elswick Engineering Works, 1900.

    Turning Away from the Tyne

In 1910, a street of houses in Lemington called Bell’s Close was erected along the riverfront. What distinguished this street from others that came before it, however, was that it faced backwards, away from the Tyne. Indeed, the backs of the houses didn’t even have windows; they had turned away from the water because it had become ugly, foul smelling, and dangerous. This exemplified the larger trend that had been taking place all along the river, of houses and town centres moving further and further away from the water, demolished in favour of factories and warehouses. After the First and Second World Wars, when industrial production on the Tyne began to decline, this resulted in the complete abandonment of much of the riverside, what had historically been some of the most desirable land available. A committee set up in 1969, immediately after the dissolution of the TIC, wrote that where Newcastle’s quayside had previously been one of the most overcrowded regions in the country, it had now become a ‘neglected back alley’. Humanity’s environmental impacts had rebounded upon humanity itself. In 100 years the TIC had overturned what had seemed an inalienable truth for thousands of years: that rivers were at the centre of human civilisation. By 1940, the 1969 committee wrote, ‘neither salmon nor children could enter its polluted waters’.

As the scale of the destruction became apparent, however, pressure mounted on the TIC from both the public and other organisations to do something about it. The primary driving force behind this was the TSC, which had been advocating stricter environmental regulation all throughout the TIC’s lifetime, but to little avail. In 1921 it helped set up the Standing Committee on River Pollution Tyne Sub-Committee (SCORP), which produced a number of reports with suggestions for how to improve the water quality, including a comprehensive sewage treatment plan in 1936, but the TIC, the Second World War, and a lack of funding blocked any progress. A Newcastle University report in 1957 said that public opinion ‘requires an improvement’ of the river environment, and a 1958 motion in the House of Commons recommended action for tackling pollution in the Tyne, but the TIC was equally uncooperative with these as it would be with the Tyneside Joint Sewerage Board, set up in 1966. Just as with the Newcastle Corporation before it, the culture of the TIC had become engrained: it saw itself as the heroic protector of orderly, profitable trade against the dangerous, unpredictable natural world. To a growing number of people, however, the TIC had become the villain, too willing to sacrifice the picturesque for the profitable.

    CONCLUSION

The success of the TIC was ultimately short-lived when compared with that of its predecessor, the Corporation of Newcastle, which lasted for nearly 400 years. For an environmental historian, however, this is not surprising, as they can appreciate the benefit the Newcastle Corporation found in achieving a balanced relationship with the river on which it relied. Conversely, what the proceedings of the TIC show us is that it did not look out for the health of the river, nor did it care for it. Instead it grew wealthy on the back of ‘robber industries’, trades that ‘carry the seeds of their own decline’. It cannot be denied that its works were marvels of engineering, and that for some of the human population they brought great wealth, but to celebrate the reign of the TIC as the “heyday” of the river is a perverse anthropocentric notion that ignores the vast majority of Tyneside’s inhabitants. The commission operated in a way that was harmful to the health of all life based around the River Tyne, including the human population, and the scars it left are costing the region in the long run in the resources spent attempting to heal them. The modern Nile, Ganges, and Yangtze, whilst far grander waterways, might do well to pause a moment and listen to the Tyne’s story, as they will find parallels within it, and lessons upon which they may wish to act.

    Author/Publisher: Louis Lorenzo

    Date of Publication: 24th of August 2019

• Why History Isn’t Just for Humans

    “On the planet Earth, man had always assumed that he was more intelligent than dolphins because he had achieved so much – the wheel, New York, wars and so on – whilst all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man – for precisely the same reasons.”

– Douglas Adams, The Hitchhiker’s Guide to the Galaxy (London: Pan Books, 1979)

It is not uncommon for historians to dismiss, or even scoff at, the notion that pigs, dandelions, or bacteria could be capable of changing the course of history as humanity does. Indeed, in a previous article I myself wrote that “History is only important to humans, it is a humanity after all. If no humans are affected, aware of, or record something, then surely it cannot be history.” In retrospect those words speak volumes about my fatal ignorance of the reality of our planet’s complex ecosystems, a reality I will seek to illustrate herein. Furthermore, they reveal a sentiment born of a culture that deifies the human as the sole source of meaningful action and casts the rest of the biosphere as mere clay waiting to be moulded by others’ hands. In this article I will seek to put right these mistakes and to demonstrate the importance of all nature in affecting historical events, including as agents independent of human involvement.

Initially, let us take the most obvious manner in which the natural world outside of humanity has affected history: as a resource to be consumed or manipulated. A human “survives biologically or not at all”, and ultimately it has been the desire to acquire food, shelter, and prosperity from the surrounding ecosystem that has driven the development of human societies. Historical epochs are defined by humanity’s exploitation of the natural world: who would the Mongols have been without horses? What would the slave trade have been without sugar? Human action is fundamentally entwined with the rest of the natural world; where, when, and how other life survives has shaped our decision making as both individuals and groups throughout time. This much is generally accepted. What has proved more controversial is the idea that these historical relationships have often been more a story of “interspecies cooperation” than of the dominance of one party over the other. If we examine these histories from an ecological perspective, we will soon find that the story is not as unidimensional as commonly perceived.

To return to our aforementioned hypotheticals, let us begin with the Mongol-horse relationship. Certainly, for the Mongols themselves their bond with the horse was as close to symbiosis as could be achieved; this extended so far that horse and rider would be buried together upon death. According to Mongolian shamanic tradition your soul, referred to as the ‘wind horse’, would then be protected by the equestrian deity Kisaya Tngri. Horses were the centre of this culture and were valued creatures of status and importance; their cooperation with the Mongols was what allowed the successes of that empire, and the Mongols knew it. Sugarcane too, although traditionally seen as “a domesticated essence and nothing more”, has benefited as much from its interactions with humans as vice versa, if not more so. The slave trade is an example of how far humankind will go to satisfy the desires of the plant, wilfully destroying the lives of many members of its own species in order to rapidly spread and cultivate it at the expense of all other competition. From the sugar’s perspective it appears that it is humanity that has been taken advantage of, spending huge amounts of labour to propagate the plant on an industrial scale whilst the sugar does little to help human propagation in return. Only this small change in perspective, to consider how other living things are benefited or impeded by historical events, reveals the centrality they hold within them. We must not lose sight of the fact that all the social, political, and economic systems of civilisation are completely reliant on the ecosystem which supports them. To view nature simply as a resource is “to render it dead”, when it is very much alive.

We have established that, in cooperation with humans, other living things are agents of historical change, but what can we say to the notion that the native biota of our planet can be defined as historically significant for actions they take independently of humans? The question of whether cotton, cattle, or cholera has agency in affecting such things will be dealt with later, but, with agency or without, the literature increasingly concludes that living things other than humans can independently affect historical events. The most explored example of this in the field of environmental history is that pioneered by Alfred Crosby in texts such as The Columbian Exchange and Ecological Imperialism; Crosby brings to the fore the biological element of European imperialism, highlighting the spread, sometimes intentional and other times not, of flora, fauna, and microbes across the world alongside humanity. The ‘successes’ of imperialism, previously ascribed solely to human action, have been re-evaluated. The European “portmanteau biota” of rats, cattle, dandelions, and many others have been acknowledged for the intense disruption of local ecosystems upon their introduction to colonial targets such as the Americas, South Africa, Australia, and New Zealand. Many indigenous plants and animals were overwhelmed and began to recede in the face of the invaders, undermining the ecological foundations of native societies and causing economic, political, and social instability, an instability which the Europeans took advantage of. Even more crucial than this, however, was the role of imported European diseases, which ravaged populations, smallpox alone causing demographic decline of up to 90% in some regions. In some instances the spread of this portmanteau biota was orchestrated and controlled by humanity, but in many others these organisms spread and propagated untempered, like an ecological tsunami, across those lands unfortunate enough to be targeted. This independent action enabled many of the victories of colonialism, and thus history was profoundly changed in the process.

This is by no means the only example of independent ecological action profoundly affecting historical events, often against the will of Homo sapiens. Disease in particular provides a multitude of examples, given the difficulty humans have in influencing microbes. Infamous plagues such as the 14th-century Black Death, the 1918 Spanish flu, and the contemporary HIV/AIDS pandemic have all killed millions of people and absorbed incalculable amounts of human time, energy, and resources. In these situations the pathogen has used humanity as a resource to further its own ends. For plants and animals we can look to the recent jellyfish invasions of the Mediterranean and the Black Sea, completely unwanted by humans attempting to avoid “ecosystem collapse”. We could also look at the battle for water and oil taking place in Kazakhstan between human and cotton, currently being won by the latter. All such examples demonstrate that the natural world does not sit statically waiting to be interacted with; it will change history with or without humanity’s permission.

These prior examples have shown that non-human living things have the capability to independently change the course of human history, but must we be so focussed on history as a human domain? Must our definition of ‘historical events’ assume the involvement of the human at all? If other life forms can influence human history then they must be capable of influencing their own; a simple change in the value historians place on natural history is all that is required to view the discipline as something that lives beyond a single species. For some particularly intelligent animals this case is considerably easier to make than for others, as there is an increasing volume of evidence suggesting that humans are by no means the only species on the planet capable of understanding the concept of history. Most spectacularly, elephants have been observed to travel to the bone sites of their ancestors with their calves, seemingly in order to pay homage to them and pass on a shared history to their children. Similar behaviour has also been reported in dolphins, giraffes, Siamese cats, and even ducks. If other creatures have a concept of history then the notion of ‘historical events’ as uniquely human quickly appears an anthropocentric, egotistical falsity.

However, even beyond those wonderfully intelligent creatures that can conceive of the concept of history, humans included, we must still admit the presence of historical events. All living things have a history, and one that is important to their own future and to all the life about them, whether they know it or not. Would we seriously deny the extinction of the dinosaurs the status of ‘historical event’, given its significance in shaping the evolution of all life on this planet since? We see human historical events as significant because they satisfy human values of significance; humanity is the most advanced species on Earth only by its own parameters of what counts as advanced. As Michael Pollan notes: “plants have been evolving much, much longer than we have… perfecting their designs for so long that to say one of us is the more “advanced” really depends on how you define that term”. Inherent in environmental history as a discipline is a certain deflation of the human ego, a recognition that humanity does not always command its environment in the manner it imagines. Indeed, were the bees or the ants or the grasses of Earth to die out tomorrow, their demise would have a far more devastating effect upon the biosphere than if humans were to do the same. These life forms are so evolutionarily advanced that they have made themselves indispensable, their very existence tied to almost all other life on Earth.

But why must so much time be spent demonstrating that humans are not at the centre of historical events? Because the principles of such an argument often run directly counter to many of the principles on which contemporary human society has been built: principles of human exceptionalism and the myth of a species that wields ultimate power over all others. Our modern economists, politicians, mathematicians, and historians have been educated within a system “designed to further the conquest of nature and the industrialisation of the planet”. In a contemporary context, complete domination and control over the natural world is a defining concept of humanity itself, and thus the notion of non-human living things affecting historical change has come to be seen as a far more radical concept than it really is. This has not always been the case, however; the 18th-century ‘age of enlightenment’ is generally accepted as the period in which attitudes between humankind and the rest of nature shifted. In this environment of increasing urbanisation, invention, and belief in the power of man (woman coming roughly 200 years later), both academic and popular views of nature moved from seeing it as a force to be worked with toward something to be worked against.

Only over recent decades has increasing awareness of the threat of climate change, the “historical event of our times”, begun to reintroduce narratives that place greater emphasis on the role of the natural world in affecting human affairs. This has occurred almost simultaneously across many fields, including anthropology, geography, and sociology alongside history. The inherently interdisciplinary nature of ecological study has also brought a vital and increasing influence from the sciences on humanities and social-science scholarship in this area. Donald Worster’s lamentation of 1993 that “Evolution and history remain… separate realms of discourse” is beginning to be addressed. For many years most of our historical canon has told us that humanity has been the controlling factor in all historical events, which has ill-prepared us for the present threats we face from outside the anthroposphere. Environmental factors have been denied importance in history in a similar fashion to other historical minorities: swept under the rug in favour of tales which affirm baseless assumptions about whose history matters most. Environmental history is the next stage in the movement of ‘history from below’, bringing to light new interpretations of past events through the lens of a demographic that has been historically underrepresented.

In this field of study the question of agency in history is often raised, and it has proved a complicated affair for historians to discuss. William Sewell concluded that agency “implies consciousness, intention and judgement” and is therefore “limited exclusively to humans”. Amanda Rees, conversely, criticised this view, claiming that modern notions of agency have been tied to ideas of a “rational, liberal, individual self” that omit the possibility of agency through group action or unintended consequence. Ewa Domańska goes so far in respecting non-human agency as to call for a “multi-species co-authorship” of history. The central problem, which many acknowledge, is the looseness with which the term ‘agency’ can be defined, to the extent that much of this debate has spilled over into the realms of philosophy, where even the notion that humans possess agency has been called into question. Andria Pooley-Ebert perhaps makes the best compromise when she concludes that “giving an animal historical agency is not necessarily implying that the animal acted independently, but rather that it was an integral component in a complex relationship”. The only change needed to this statement is to include all living things under that definition, not just animals. Ultimately, however, the question of agency is less important than the question of value: whether or not we place importance on historical events which are not our own.

    The famous adage which asks: ‘if a tree falls in a forest and no one is around to hear it, does it make a sound?’ is a rather presumptuous one. It assumes that we should struggle to consider whether anything really occurs outside of our own narrow perceptive field. In actuality, all we need do is consult the creatures of the forest to know it made a sound. Non-human living things are not only integral to affecting historical events from a human perspective, they are essential to the histories of their own species which exist, at various points throughout history, in either independence of, or symbiosis with, our own.

    Author/Publisher: Louis Lorenzo

    Date of Publication: 13th of March 2019

  • Review: Children of the Sun: A History of Humanity’s Unappeasable Appetite for Energy

    Text: Children of the Sun: A History of Humanity’s Unappeasable Appetite for Energy. By Alfred W. Crosby. New York: W. W. Norton & Company, 2006. 208pp.

The experience of reading Children of the Sun feels more akin to a fable than a history volume; even the title rings of fairy tale. As if the book were meant to be read aloud, Alfred Crosby fills his text with intriguing and often humorous accounts that complement the overall story he endeavours to tell; each chapter ends with a “coda” that sets aside historical analysis in favour of engaging narrative. Indeed, “narrative” is the correct word to describe Crosby’s survey of human history, which is structured so that every section presents a problem which our protagonist, humanity, must overcome lest it fail in its mission to consume ever increasing quantities of energy. Migration into inhospitable environments results in people learning how to cook. The felling of forests creates a need to mine for coal. The hunting of whales to near-extinction for their oil demands the creation of electric bulbs, and so on. The text even pulls off a traditional literary-style ending as Crosby brings us ‘full circle’ with a discussion of how prospective nuclear fusion would replicate the prime energy source upon which humanity has relied for all its history: the sun.

In terms of adding new material to the historical literature in this field of study, Children of the Sun is light. Most of the text consists of a patchwork of previous work, some of it Crosby’s own, drawn together, explained, and interpreted so as to create his narrative. However, this description does not do justice to the brilliance with which Crosby so eloquently combines multiple fields of research under his one large umbrella. Physics, geology, anthropology, biology, archaeology, and others find a home together between these pages, wherein Crosby dips his toe into each of these separate pools of knowledge and pulls out only the information he needs. The collective use of all these disciplines creates a sense of grandeur that effectively convinces the reader of the importance of Crosby’s world. The history of energy seems to pervade all aspects of our universe, from guinea pigs to Phileas Fogg, and the historical sections of this book move with a pace that matches the increasing speed of humanity’s developments over time. To complement this approach Crosby uses numbers as a technique with which to dazzle his audience, as the reader sees exponential growth occurring before their eyes. In 1682 people were calling the 14 waterwheels on the Seine that supplied 75-124 horsepower the “eighth wonder of the world”; in 1834 steam engines were providing 33,000 horsepower in the cotton mills of Britain. In 1900 the world produced 100 million barrels of oil; in 2000 it produced 20 billion. Crosby’s story is one in which the stakes are consistently raised.

Humanity may be the protagonist of the narrative in Children of the Sun, but it is not necessarily the hero. The human ability to innovate and invent its way out of problems throughout history is extraordinary, but there is an air of tragedy in the way all humankind’s energy problems seem to be ones it creates for itself through its own exponential demand; such a notion could also be read as untempered greed. Crosby likens energy to a drug which humankind has become unable to stop taking for fear of the crash, and with every hit of increasing strength it becomes harder and harder to revert to old ways of life. His final coda asks whether a 2003 blackout in New York, which caused momentary chaos throughout the city, might be a ‘premonitory vision’ of a future energy crisis. Crosby makes great effort to show the reader the fragility of our energy networks and reminds us that power on demand is a historical abnormality, not a commonality, and there is no precedent which says it all could not collapse around us. However, it must be said that ultimately Crosby takes an optimistic approach to his subject matter, preferring to consider solutions rather than lament difficulties. Children of the Sun wants to inspire its readers, to be the handbook at the side of present-day problem solvers, enlightening them as to the history behind, and the significance of, future environmental action.

To facilitate this optimistic approach Crosby has had to re-work concepts developed in his earlier works such as Ecological Imperialism. Prior to Children of the Sun Crosby had taken a pessimistic view of humanity’s influence on its own history, arguing that forces such as bacteria, flora, fauna, and the weather have had a more defining impact on the course of history than people themselves, shaping human decisions which humans believed were primarily their own. Children of the Sun still carries these themes (we are constantly reminded of the utter dominance of the sun in all things from fire to fission), but it develops them in a direction that gives Homo sapiens greater agency than Crosby has previously ascribed to the species. The focus now is on co-dependence between humanity and the natural world, rather than on humanity’s inferiority to it. Whether with dogs, maize, or horses (which Crosby claims humanity may have saved from extinction), humanity has managed to further exploit the energy of its age with the assistance of the natural world; these have been our protagonist’s allies in Crosby’s narrative. Another theme consistent with Crosby’s earlier works is that of the environment for change: humanity is only ever pushed to innovate if forced to by circumstance; otherwise we are content to be ‘no more than a parochial kind of ape’. Children of the Sun’s stance is that the human’s genius lies in its ability to create for itself the environment in which it is pushed to innovate. The unsustainable expansion and consumption of humankind forces invention.

Narrative can sometimes act as a deterministic and negative force behind historical work, but it is clear why Crosby has constructed Children of the Sun in this anecdotal style. In the tradition of big history for which the author is so well known, Crosby has written an alternative “origin story” for humanity. The simple and straightforward tone works well for communicating the kind of fundamental issues Crosby tackles in the text, which would be lost in a more finicky and academic format. In many ways Children of the Sun delves to the elementary roots of Crosby’s earlier works, their ideas of consumption, destruction, and propagation, and simply focusses down on energy as the central driving force behind these themes. The text’s relative simplicity also serves to widen the audience to which Crosby writes and thus disseminate the moral lessons contained within as broadly as possible. Indeed, moral teachings have always been an aspect of Crosby’s writings, but on a far more implicit level, with the author seeking to draw the reader’s opinion in a particular direction while never explicitly stating intent (an extremely common feature of historical work). Completely conversely, Children of the Sun explicitly proclaims its judgements as incontestable truths, simply stating in conclusion that ‘the way we live now is new, abnormal, and unsustainable’. This is a refreshing approach which leaves the reader with no ambiguity as to the author’s intent, and thus allows them to make more informed decisions about their own judgements of the text and its ideas.

However, it is unfortunately true that when Crosby turns from historian into contemporary commentator in the third and final section of his book, the pace of the earlier text falls away. The prospective and uncertain nature of future developments forces a drier and less confident literary style which fails to enthuse in quite the same fashion as the rest of the text. Many readers may also not enjoy Crosby’s transformation into moral philosopher during this section, feeling as though they are being told what to think without being allowed to make up their own minds. On the other side, there will be readers who wish Crosby did more in this last section to inject urgency into present environmental issues; as it stands, Crosby’s optimism leaves the reader concerned but not worried about the current energy conundrum, confident that human innovation will prevail in one manner or another.

    Crosby’s fable is a complex story told simply. As a yarn for modern times it weaves a narrative which keeps its reader engaged as they wait to see how humanity will overcome the next obstacle in its path, each higher than the last (although the end is somewhat anticlimactic). As a historical text it is an informative and interesting survey whose greatest achievement is to draw together multiple disciplines which so often are left apart. As a moral guidebook it is a refreshingly straightforward philosopher that looks to the future as a place of uncertainty, but also of optimism.

    Author/Publisher: Louis Lorenzo

    Date of Publication: 20th of November 2018

  • The Death of a New World: Disease and Population Decline In South America from 1492 to 1800CE

    Preface: A Subject of Scale

Taken as a whole, the story of population in South America from 1492 to 1800CE is one of demographic collapse. At the low point of this period the continent’s population reached barely one fifth of its pre-Columbian size. Taken individually, however, the huge variance in experience within South America during this period becomes apparent: from regions whose population figures fall and rise in dramatic peaks, to those that undulate composedly across modest crests.

At the outset, however, it is important to note the vast scale of this undertaking and the limitations it therefore imposes, namely a limitation to the macro. Many studies of South America in this field focus acutely on one specific area and succeed in realising the demography of that space in fine detail. These studies range in size from Noble David Cook’s study of Peru over a period of 100 years to Brian M. Evans’ study of an Andean village over a period of 43 years. The view herein, covering all of South America over 308 years, will draw on studies such as these but will not attempt to replicate them in outcome. The expansive scope of this article requires it to correlate broader trends and contextualise them within a continent-wide frame. This is a complementary approach to the mathematical facet of this study, which operates more effectively with larger pools of data. The central aim of this article is to plot the change in the indigenous and total population of South America over this period, as a collation of other studies done in this area alongside primary census data. Moreover, it will assess the extent to which disease was the primary cause of demographic change, and offer explanations for the variance in population decline across five distinct regions within South America (see figure 1). Furthermore, when the term ‘disease’ is used herein it is used as a collective term for multiple afflictions; there is much dispute as to which diseases affected which populations at what times, although it is generally accepted that smallpox, measles, and typhus were the main killers, with the Variola major strain of smallpox constituting the greatest killer overall.

    The standard disclaimer must be applied here that the figures presented herein, although primarily drawn from census data of the period and believed by the author to be broadly correct, are bound inevitably to be inaccurate in many aspects. This is the challenge of applying a scientific approach to historical data. However, dealing with incomplete source material is the perpetual challenge of the historian and one that cannot be shied away from, lest no history be written at all.

    Pre-Columbian South America

Before we begin to assess change across the continent we must first be clear about what we are assessing a change from. What was the makeup of South America in 1492? Estimates of the overall population for the region remain in dispute; however, mean figures have been produced for this article, in aggregation of several estimates made over recent years. The central influence for these pre-Columbian figures is the work of William M. Denevan and his text The Native Population of the Americas in 1492, which remains an excellent source on this topic. Ultimately the total population figure reached for pre-Columbian South America in 1492 is 20,000,000. Across this study, this total is broken down into five regions: Northern S. America, Greater Amazonia, Central Andes, Southern S. America, and Chile. The continent has been split this way partially due to geographical differences between the five regions and partially to facilitate cross-comparison over time. If we were to use boundaries that shifted over time, such as the borders of nations, our comparisons would be less accurate. These borders are approximations of regions and are not intended, nor should they be taken, as accurate boundaries.

    Figure 1. Map of Identified Regions of South America

The regions identified in figure 1 provide the geographical frame of reference for the rest of this article. The population split for South America in 1492 is presented below.

    Figure 2. Table of Pre-Columbian Populations in South American Regions

    These demographics aside, what other features can we ascribe to these five regions that might be important in a study of post-Columbian disease spread? For this study the focus lies on significant factors that can be compared across the five regions identified. These are: climate, geography, and patterns of settlement.

In terms of climate, for which the Köppen-Geiger climate classification system is used, our two most northerly regions, Northern S. America and Greater Amazonia, can be described as ranging between tropical savanna (Aw) and equatorial (Af) climates. These warm and wet conditions, which cover the great majority of these two regions, are well suited to the spread of disease, particularly as they are prone to monsoon; we would therefore expect higher levels of mortality in these regions than in others. The Central Andes region contains greater variance in climate due to its mountainous geography: it is mostly covered by a cold desert climate (BWk) but also contains large areas of semi-arid (BSk) and temperate oceanic (Cfb) climate. We would expect these colder and drier conditions to be more effective at slowing disease spread. The Southern S. America region would mostly be classified as humid subtropical (Cfa), with some areas of tropical savanna (Aw) and semi-arid (BSh) climate. This region can be described as susceptible to the spread of disease, but not to the extent of Northern S. America and Greater Amazonia. Finally, in Chile we find areas of cold desert (BWk), temperate Mediterranean (Csb), and temperate oceanic (Cfb) climate. This, in similar fashion to the Central Andes, is an area in which we would expect to find reduced mortality rates due to these pathogen-hostile climatic factors.

The geography of the continent can be split into two areas on either side of the Andean mountain range, which covers the regions of the Central Andes and Chile. These mountains are a dominant factor in the lower temperatures seen in these regions, as discussed above. Additionally, the mountains help curtail the spread of disease by limiting travel and isolating groups from one another. On the eastern side of this “Andean split”, in Northern S. America, we can identify the Guiana Highlands as a geographical feature that would act similarly, as would the Brazilian Highlands in Greater Amazonia and Southern S. America. However, these highland areas proved less effective at preventing disease spread because, unlike the Andes, they do not cover their regions in entirety; the large lowland areas that make up most of these regions still allowed disease to spread efficiently.

As for patterns of settlement across the continent: in the clear majority of cases we are discussing a dispersed population of peoples who did not gather into large permanent communities. This is the case for Northern S. America, Greater Amazonia, Southern S. America, and Chile. Certainly there were areas of more concentrated population within these regions, often along rivers and coastlines, but these were still clusters of villages rather than towns or cities. This isolated pattern of settlement is often effective in curtailing the spread of disease, so we would expect regions with this pattern to be less susceptible to illness. In 1492 the exception to this rule was the Central Andes, occupied in majority by the Inca Empire, which was home to cities with populations of 700,000 or more such as Cusco, Quito, and Choquequirao. Here we would expect the area to prove more susceptible to disease spread than others.

    Three Hundred Years of Disease in South America

Population figures across this period, particularly the first 100 years, must be taken with a large margin of error. After 1600 the Spanish and Portuguese began taking censuses of the regions they controlled, spanning by this point almost all of the continent, and so we do have primary statistics available that were not available for our pre-Columbian assessment. Even so, it is highly likely the numbers they give are underestimates, as even today estimates of the indigenous population of the continent continue to be revised upward. Nonetheless, by collating all these censuses in conjunction with our pre-Columbian estimate we can produce a graph that tracks the population of the continent over these 300 years.

    Figure 3. Graph Plotting the Population of South America from 1500 to 1800CE

    Using this data, we can also calculate the rate at which the population changes between these points, as seen in the table beneath.

    Figure 4. Table of Total Population Rise and Fall (%) in South America 1500-1800CE

These statistics show the immediate damage done to the continent and the recovery from it. However, it must be noted that these population statistics are not solely for indigenous peoples; they include all those who, either by choice or by force, moved to the continent during this time. This is what explains the 155.8% increase in population from 1700 to 1800: it is driven by immigration, and we would not expect a native population to recover at this rate. The question therefore becomes: what is the rate of native population recovery, as opposed to overall population increase? For this we can utilise our regions; by splitting our demographics between these five zones, including those that saw large immigration and those that did not, we can determine the extent to which immigration has affected the overall population statistics.
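    For transparency’s sake, the rise and fall percentages in these tables follow the standard growth-rate formula. Below is a minimal sketch of that arithmetic; the starting population is a hypothetical placeholder, with only the quoted 155.8% rise from 1700 to 1800 taken from the text above.

```python
# Percentage rise/fall between two census points, as used in figures 4, 6, and 11.

def percent_change(start: float, end: float) -> float:
    """Population change between two dates, expressed as a percentage of the start."""
    return (end - start) / start * 100.0

# A 155.8% rise (the quoted 1700-1800 figure) implies an end population of
# roughly 2.558 times the start. The starting value below is hypothetical.
pop_1700 = 1_000_000
pop_1800 = pop_1700 * 2.558
print(f"{percent_change(pop_1700, pop_1800):.1f}%")  # prints 155.8%
```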

    Figure 5. Graph Plotting the Population of Regions of South America 1500-1800CE

For the graph above we can also produce a table of population change rates.

    Figure 6. Table of Population Rise and Fall (%) in Regions of South America 1500-1800CE

This information is enlightening in multiple respects. From 1500 to 1600 we can immediately see that some areas declined to a far greater extent than others: Northern S. America, Greater Amazonia, and Southern S. America declined at much higher rates than Chile and the Central Andes. This can be explained by the climates and geography of these regions which, as explained above, are far better suited to preventing the spread of disease in the Central Andes and Chile than elsewhere. In Chile this can be further explained by noting that disease did not reach the region before 1561, much later than the other regions (see figure 9). However, some issue must be taken with the 98% decline figure for Greater Amazonia during this period. Of all the data drawn on for this study this figure seems the most inaccurate: although it may well be true that mortality rates were high in this region due to its tropical climate, a population density of 0.59 persons per km² would never allow such a drastic reduction (see figure 8). This article would hazard that the rate of reduction was closer to the 75% seen in Northern S. America, as these two regions have very similar climates and geographies. Nevertheless, in the absence of any further data the -98% figure will continue to be used.
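    To make this objection concrete: applying a 98% decline to the quoted density of 0.59 persons per km² leaves a region almost entirely empty. A quick back-of-envelope check, using only the figures quoted above:

```python
# Back-of-envelope check of the -98% figure for Greater Amazonia, 1500-1600.
density_1500 = 0.59                    # persons per km2, from figure 8
claimed_decline = 0.98                 # the disputed decline rate
implied_density_1600 = density_1500 * (1 - claimed_decline)
print(round(implied_density_1600, 4))  # 0.0118: roughly one person per 85 km2
```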

From 1600 to 1700 the notable standout is the one region which continues to decline whilst the others begin to rise in population: the Central Andes. This is likely explained by the concentrated population density in this region, which allowed disease to continue to spread virulently, as seen in figure 8. By 1600 the other four regions all have population densities beneath 1, compared to the Central Andes at 3.13. It may well be argued that these four regions had by this point fallen beneath the ‘minimum concentration of hosts’ threshold whereas the Central Andes had not; their populations were now too small relative to their landmass to facilitate further disease spread. The well-developed road system of the Incas would also have allowed continual movement of peoples around the empire, further facilitating the dissemination of infection. This is especially crucial given the long incubation period of the two largest killers, smallpox and measles, which exist in the body for c. 10 days before the infected person begins to show symptoms; the further a person can travel within these 10 days, the faster these illnesses spread. We can also see, within the regions whose populations do rise, differences in the rates of increase. This is explained by immigration, not native recovery, and will be explored in depth in the assessment of the data between 1700 and 1800.

    Figure 7. Table of Landmass Area of Regions (km2)
    Figure 8. Table of Population Density of Regions Within South America (Number of Persons Per km2) from 1500-1800CE
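    The densities in figure 8 are simple quotients of the regional populations and the landmass areas of figure 7. A minimal sketch of that calculation and of the ‘minimum concentration of hosts’ comparison follows; the populations and areas below are hypothetical placeholders (chosen only so that the Central Andes reproduces the quoted 3.13 figure), not the article’s data.

```python
# Density = population / landmass area, compared against the article's rough
# threshold of 1 person per km2 at 1600.
MIN_HOST_DENSITY = 1.0  # persons per km2

def density(population: float, area_km2: float) -> float:
    return population / area_km2

regions_1600 = {  # (population, area km2) -- hypothetical placeholder values
    "Central Andes": (3_130_000, 1_000_000),  # yields the quoted 3.13
    "Chile":         (400_000, 800_000),      # yields 0.5, i.e. beneath 1
}
for name, (pop, area) in regions_1600.items():
    d = density(pop, area)
    status = "can sustain spread" if d > MIN_HOST_DENSITY else "below threshold"
    print(f"{name}: {d:.2f} persons/km2 ({status})")
```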

From 1700 to 1800 we can clearly identify the regions which experienced outside immigration and those which did not. Again we see a divide between the Andean regions of Chile and the Central Andes and the rest of the continent. On the west side of the Andean split we see population figures struggling to begin a recovery towards pre-Columbian levels, with Chile’s population stagnant, seeing no increase across the 100 years between 1700 and 1800. This is representative of how the native population across the continent recovered from the impact of disease: slowly. The reasons for this are numerous; one large factor is a decrease in fertility rates after disease has ravaged a population. This can be due to the disease itself physically affecting reproductive abilities or unbalancing the gender ratio, but can also be a result of social grief and stress. Recovery rates are also affected in the long term because disease causes higher mortality among the young, the very group on whom future reproduction depends.

    Figure 9. Map Depicting the Spread of Disease Across South America 1524-1561CE

With our knowledge of the western side of the Andean split, the extraordinary nature of the figures from the eastern side becomes apparent. Even the 53.3% increase seen in Northern S. America would be classed as unprecedented, with figures of over 1,000% existing in the realm of fantasy. These statistics correlate well with the records kept by the Spanish and Portuguese of slaves imported during this period, which indicate c. 5,000,000 were imported into the Viceroyalties of Brazil, Rio de la Plata, and New Granada from 1700 to 1800. Using this data we can calculate how much of the increase in population on the east side of the Andean split is due to immigration. Taking Greater Amazonia as our example, given that it saw the greatest immigration, we can take 10% as a generous figure for the increase in native population during this period. This would constitute only 30,000 of the 3,300,000 increase seen in population, meaning that 99.09% of the new population was imported. Using similar reasoning we can interpolate new data across all our existing figures to produce a graph that tracks only the indigenous population.
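    The decomposition of that growth into native recovery and importation reduces to a two-line calculation; a sketch using the Greater Amazonia figures just quoted (the 10% native-growth ceiling is this article’s own generous assumption):

```python
# Splitting the 1700-1800 population rise in Greater Amazonia into native
# and imported components, per the figures quoted above.
total_increase  = 3_300_000  # total population rise, 1700-1800
native_increase =    30_000  # the generous 10% native-growth ceiling
imported = total_increase - native_increase
print(f"imported share: {imported / total_increase:.2%}")  # 99.09%
```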

Figure 10. Graph Plotting the Indigenous Population of South America 1500-1800CE
    Figure 11. Table of Indigenous Population Rise and Fall (%) in South America 1500-1800CE

Now that we have calculated the decrease in the native population we must ask: to what extent is disease responsible for these deaths? It is understood to be the major factor, but to what degree? Let us examine the Spanish and Portuguese maltreatment of the indigenous to gauge the extent of its impact.

    Figure 12. Table of Conflicts Within South America 1492-1800CE

It is evident from this information that death from conflict may be considered a negligible factor in the overall indigenous population decline of South America. The single conflict with the most meaningful impact on population is the Inca civil war, which accounts for only 1.1% of the population decline from 1500 to 1600. The Arauco war caused the most deaths overall, but they stretched over a far longer period, diluting its demographic impact.

As for the encomienda and mita systems, alongside other cruelties brought by the Europeans, it is unknown how many died as a direct result of these persecutions, as no records were kept, not even estimates; these, evidently, were not numbers the colonisers wanted recorded. Even if we had such data it would be difficult to separate deaths caused directly by cruelty from those that came tangentially because of it. We may still make some assessment of their material impact, however; the one indisputable fact is that these systems helped facilitate the spread of disease through multiple means. They gathered previously dispersed populations into concentrated groups, forced them to travel long distances, and worked them into a state of weakness, all of which are ample conditions for infection. In this sense their impact on population decline may have been far greater than they are given credit for here. Nonetheless, as far as our statistics show, the ultimate cause of death was, by a far margin, disease.

If we take our statistics from figures 10 and 11 and subtract population decline for reasons other than disease, we can produce an estimate of indigenous population decline specifically as a result of disease. We have calculated that approximately 1% of the population died as a result of warfare within each of our 100-year intervals; using a global average we can also estimate that approximately 1% can be accounted for by “natural causes” and accident. As discussed, there are no statistics for the impact of systems such as the encomienda, but we must estimate they had some impact given the severity of their programmes, and they have been given a 2% impact factor. Overall these account for 4% of the total indigenous population decline from 1500 to 1800. The total indigenous population decline from 1500 to 1700 (its lowest point) is 81.4%, and therefore the total decline as a result of disease, before the population begins to rise, is 77.4%. This means the total number of indigenous people killed by disease from 1500 to 1700 is 15,480,000. It must also be noted, for clarity, that even after this point, as the population increases, indigenous peoples continued to die from disease, and some of these infections continue to plague areas of South America in the 21st century.
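    The attribution above reduces to a simple subtraction over the pre-Columbian baseline; restated as a short calculation for clarity, using only the figures already given:

```python
# Attribution of the 1500-1700 indigenous decline, per the estimates above.
baseline = 20_000_000   # pre-Columbian population of South America, 1492
total_decline = 0.814   # 81.4% decline to the 1700 low point
other_causes = 0.04     # warfare (~1%) + natural causes/accident (~1%) + forced-labour systems (2%)
disease_share = total_decline - other_causes  # 0.774
deaths_from_disease = baseline * disease_share
print(f"{disease_share:.1%} of the baseline, {deaths_from_disease:,.0f} deaths")
# prints: 77.4% of the baseline, 15,480,000 deaths
```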

Ultimately this article has tracked the population, indigenous and otherwise, of the South American continent from 1492 to 1800. It has provided reasoning for the variance in figures seen across the five identified regions and compared them against each other to infer further detail about the impacts of disease and other factors during this time. Although these figures are approximations, their significance in helping us understand this troubled period of history is nonetheless clear.

    Author/Publisher: Louis Lorenzo      

    First Published: 19th of October 2018

    Last Modified: 23rd October 2018 (Clarity)