Tag: Anthropocene

  • Children of the Anthropocene: The Implications of the “Human Age” for the History of Childhood

    Positioning the History of Childhood Within the Anthropocene Debate

    Across the historical discipline the concept of the Anthropocene is being used to redefine how we interpret and describe the relationships between humanity and the rest of the natural world. The notion of a “human history” and a “natural history” existing as two separate streams of academic interest has met the Anthropocene confluence, thrusting the two together into one inseparable river. Our acceptance that ‘Humanity has initiated an environmental “phase shift”’, as Jason Kelly puts it, has opened new fields of historical enquiry into the manner in which people influence the environments of which they are a part, and the manner in which those environments influence them in turn. The ever-increasing relevance of climate change to people’s everyday lives has furthered this interest, and has led many historians to note the ways in which the Anthropocene does not constitute a universalising force: as Jason Moore writes, ‘because of existing power relationships, the ‘new reality’ will be more ‘real’ for some than for others’. Gender historians have taken a leading role in considering how the Anthropocene relates to the disparities between men and women as both contributors to, and subjects of, the consequences of our new “human age”. Since Val Plumwood’s Feminism and the Mastery of Nature (1993) it has been recognised that both women and the natural world have been depoliticised, diminished, and distanced from the “standard” social order by the same dualistic Enlightenment narratives. Both have been othered, defined as inferiors by an elite that has pretended to ‘an illusory sense of autonomy’ rather than acknowledge its reliance upon them. The introduction of the Anthropocene has evolved these arguments, shifting historians toward more holistic understandings of an earth-system of wider interconnected dependency. As Jessica Weir notes, the Anthropocene has proven to be a useful conceptual tool in criticising prior narratives of ‘hyper-separation [that place] humans in a relation of mastery with respect to earth others and limit their capacity to respond to ecological devastation’.

    Many other historians, including those of empire, race, ethics, and materialism (to name but a few), have also adopted the language of the Anthropocene. Importantly, however, none have done so wholesale; instead each new voice has provided critique and novel perspective on the conceptual framework the Anthropocene provides. Dipesh Chakrabarty and Kathleen Morrison have argued that from a postcolonial perspective the term’s use is too often western-centric and shaped by a set of ‘cultural blinders [that] impede our understanding of the complex and diverse history of the earth system’. This is likely because ‘much of the discourse on the Anthropocene has been dominated by Western scientific perspectives’. Katherine Gibson and Julie Graham have proposed ‘Capitalocene’ as an alternative that highlights the economic system’s role in contemporary climate chaos. Donna Haraway has put forward ‘Chthulucene’ and ‘Plantationocene’ as non-anthropocentric descriptors that better include life forms other than humans within their remit and capture ‘the ways that plantation logics organize modern economies, environments, bodies, and social relations’. Ultimately, however, whilst the criticisms behind these proposed alternatives are valid, they have remained subcategories rather than risen to prominence, both because of the Anthropocene’s pre-established prevalence and because many academics see a utility in positioning ‘mankind’s growing influence on the planet’ at the centre of the debate during the present age of threat to the planet’s existing climatic structures. ‘Saying that we live in the Anthropocene is a way of saying that we cannot avoid responsibility for the world we are making’, as Jedediah Purdy frames it.

    The possibility of exploring the history of childhood through the lens of the Anthropocene is the area of study this essay will seek to define. Compared to histories of gender or race this relationship is an underexplored one within the historiography to date, although outside the discipline there has been greater interest in this line of enquiry. Education studies, perhaps unsurprisingly, has proven the pioneer in exploring the relationship between children, childhood, and the Anthropocene, seeking to ‘understand children and their lives as social actors enmeshed in complex social and material networks’ and to challenge the ‘presumed naturalness of childhood’, as David Blundell argues in Children’s Lives Across the Anthropocene. However, where articles such as these are rightly concerned with what Lili-Ann Wolff calls ‘the mission of early childhood education… in the epoch of the Anthropocene’, what is missing is the angle of the historian: one that explores how the Anthropocene as a concept can help us understand the manners in which environments have influenced childhoods in the past and how children have influenced them in turn.

    The great majority of the historical analysis that has addressed this relationship thus far has come from historians of childhood and has focussed on how children’s spaces have been constructed, construed, and controlled by adult society. In the British context Mathew Thomson’s key work Lost Freedom: The Landscape of the Child and the British Post-War Settlement is a typical example, arguing for a post-war transition in the relationship between children and the environment tending toward a ‘loss of freedom’ and a ‘turn toward increasing protection and restriction’ due to parental fears of strangers and cars. Similarly, Sian Edwards’ Youth Movements, Citizenship and the English Countryside interrogates how the concept of the rural was adopted by organisations such as the Scouts as an ‘antidote’ to a problematised urban sphere. In the study of more recent history there has been an interest from writers such as Richard Louv in the denaturalisation of childhood in the “digital age”, which mainstream media has branded ‘nature deficit disorder’ and which Ian Rotherham calls ‘cultural severance’.

    There have been fewer works on this topic from an environmental historian’s perspective; those there have been begin with the history of the American frontier. Elliott West’s Growing Up With the Country (1989) is one foundational text of this type, wherein West finds that the fundamental difference between children and their elders on the frontier was in how they related to the landscape. For adults the frontier was something new, but for children it was familiar, giving them a ‘kinship’ with it that was unique. The only monograph-length attempt at such a work since has been Pamela Riney-Kehrberg’s The Nature of Childhood: An Environmental History of Growing Up in America Since 1865 which, similarly to Lost Freedom, charts what the author frames as a declension in the quantity and quality of interaction between children and environment over time. However, as with the publications of education studies, the central perspective at the heart of all these works is that of the adult. The trend has been to ask how factors such as parenting, social policy, and architecture can influence and shape children and childhoods via changing environment, and whilst these are important questions to ask, a surprising space is left absent for “child-centred” and “environment-centred” narratives. This is one thing the introduction of the Anthropocene concept affords the history of childhood: as with histories of gender and others before it, an opportunity for historians to highlight the unique experiences and relationships of children with their environments that stem from their independent agency. Whilst we might argue with some justification that an alternative term such as “adultocene” or “ageocene” would better suit our needs, it will be more useful (and indeed, simpler) to explain how stories of children and childhood can be brought forward and made distinct from the “human experience” when using the Anthropocene as a framework of understanding.

    Approaching the history of childhood with this perspective gives the historian new questions to ask of their source material and new ways of answering them. How do children, through exploration, work, and play, shape their own environments? How do their wants and needs, real or perceived, influence the attitudes and actions of adults toward children’s environments? In what ways does the Anthropocene as a force and a concept uniquely affect the lives of children, and in what ways do children affect the Anthropocene? Before investigating these further, however, it must be acknowledged that, whilst stressing the importance of acknowledging divergent experiences of environment, it would be hypocritical not to point out that “childhood” itself is not a universal experience. Within childhood there are a multitude of identities that will to greater or lesser extents modify one’s relationship with the Anthropocene. As one example from Fikile Nxumalo’s Situating Indigenous and Black Childhoods in the Anthropocene:

    ‘school gardens for young Black children in urban schools are often positioned from deficit perspectives, as a way to bring nature to certain children who lack it. Here nature becomes entangled with anti-blackness as it is positioned as a site of potential transformation for Black children deemed at risk or lacking “normal” connections with nature’

    A similar argument could be made with regard to working-class children, who are perceived to be lesser for lacking something that those of middle-class parentage typically have. As Affrica Taylor summarises: ‘The assertion that children need nature has become commonplace, but should we ask which children?’. This essay, iterating and evolving upon the existing historiography and source material pertaining to British childhoods (predominantly southern urban ones), cannot claim to speak with authority on the childhood or environmental experience. However, it is an example of how the Anthropocene can assist us in examining a history of childhood, and a pointer toward new avenues of inquiry and the potential for exploring this conceptual marriage further.

    “Presencing” Children in the Anthropocene

    Despite the present dearth of historical material that seeks to presence children in the Anthropocene, the basis for study of such a relationship is strong, as western societies have long presupposed a connection between children and the natural world deeper than that of their elders. As historians of race and gender have already demonstrated, the dichotomy built between the human and natural worlds in the past has been inextricably bound to dichotomies drawn between certain groups of people and what society considers the “normal”. Anthropocene narratives that require a radical shift in the way humanity as a species constructs its own image of self are useful challengers to these old oppositions. However, for this essay it is important to understand the Enlightenment philosophies of the 18th and 19th centuries which defined (and still define) the strong perceived links between children and nature in European society before we can appreciate how the Anthropocene concept contests them.

    In broad perspective the Enlightenment birthed two competing schools of thought regarding children and environment: the rationalist and the romanticist. The romantic movement began with works such as those of Jean-Jacques Rousseau, and then William Blake, Mary Shelley, and Alfred Tennyson, in reaction to the increasingly rationalism-informed industrial European world those authors inhabited. The predominantly emotional, countercultural arguments they presented knotted children and the natural world together as joint symbols of hope that were not only innocent and pure in and of themselves but lent an innocence and purity to one another. Their relationship was symbiotic: children were a part of the natural world in a way that adults were not, and could not, be. As Rousseau argues in Émile, or On Education (1762), childhood should be defined by teaching that is ‘beyond our control’ and follows ‘the goal of nature’. William Blake’s “Nurse’s Song” in Songs of Innocence and of Experience (1794) similarly conjures images of an ideal youth as natural and pure:

    ‘When the voices of children are heard on the green

    And laughing is heard on the hill,

    My heart is at rest within my breast

    And every thing else is still’

    On the other, more instrumentally influential, side of this debate were rationalist thinkers such as René Descartes, Immanuel Kant, and John Locke. Their lines of philosophy, just as present in the 21st century as those of the romanticists, saw nature as a force of order, even the very basis of reason. However, where romanticists sought to learn from a nature they saw as inscrutable, the rationalists desired to understand and control a nature they considered fundamentally conquerable. John Locke’s An Essay Concerning Human Understanding (1689) exemplifies this approach with his famous description of children as ‘tabula rasa’, a blank slate. In Locke’s view the child, like nature, would become wild if not carefully managed, but with the correct instruction could be formed into an instrument of reason. Whilst it is tempting to reduce this tension between romanticism and rationalism to “nature versus nurture”, as William Cronon and Thomas Dunlap warn in Faith in Nature, both ideologies incorporated aspects of the other and looked to the natural world as a unifying, present force that could fill a role in society traditionally fulfilled by religion. As Sara Maitland writes in Gossip From the Forest: ‘wilderness finds its complement and counterpart either in conceptions of childhood moral innocence or the child as tabula rasa’. Most importantly for this essay, however, both drew a line between the natural and human worlds, although for different purposes, and both identified children as beings who could permeate that boundary to some extent.

    In the contemporary context these two philosophies still carry weight in how humanity responds to the environmental consequences of the Anthropocene where children are concerned. Romanticism has come to play an important role in many environmentalist movements, best exemplified by the rise of Greta Thunberg and the global youth climate strike movement which, whilst symbolic of the agency of the child, are also caught up in ‘environmental stewardship discourses that position certain children as future saviours of nature’. The same can be said for the rationalist perspective prevailing in technocratic circles that see environmental issues as ‘merely a physical Earth problem, and not an ethical one’ and view the young as saviours of the status quo via theorised future innovations and ‘the promise of one more generation’. These forms of romanticism and rationalism are dangerous as they detach the adult world from any responsibility or agency in addressing climate concerns.

    The introduction of the Anthropocene concept uproots this rationalism/romanticism dynamic. Far more than being a simple obstacle for humanity to solve, the reality of the Anthropocene has implications that have ‘the potential to challenge conventional ways of seeing those constructions of nature found at the heart of Enlightenment modernity and confront its contradictory positions’. Its very existence is proof of an implicit and deep connection between humanity and the natural world that sets aside the notion that they are antitheses of one another. Indeed, it highlights how much they are the same. This realisation deromanticises the natural and childhood worlds, unlocking them from the fairy-tale, almost orientalist perspective with which they have been perceived. Freed from this timelessness, they can be considered as active agents of change with the ability to play important roles within the global network of factors that has brought about the birth of the Anthropocene. In the new “human age”, where humanity has come to be seen as the integral operator in the earth’s “natural” systems, what the romanticists posed as a force opposite to nature has since become part of it. At the same time the Anthropocene undermines the rationalist perspective by showing that whilst humanity has the capability to influence the natural world, it cannot remove itself from its relationship with it. Having made themselves more integral to the earth’s ecosystems than ever, humans are more at risk than ever when those systems change as a result of their actions. If humanity as a species were truly in control of the natural world, it would not have chosen to create the Anthropocene.

    In other terms, if we accept the idea of the Anthropocene, we must accept that the “human” identity as constructed must incorporate itself into a wider view of nature and the planet as part of one earth-system. Therefore, the natural world cannot be construed as “other” through either romanticism or rationalism. This view of an ‘earth-system’ is important for the history of childhood as it understands the planet as a ‘unified, complex, evolving system beyond the sum of its parts’, a view that presences and places importance on historical actors who have otherwise been deemed negligible.

    The Becoming World

    Understanding how the Anthropocene highlights environmental and ideological faults that underlie contemporary and historical perceptions of child and nature will allow historians to construct revised narratives of childhood and environment alike. This means acknowledging and exploring how children act upon their environments both personally and extensionally through the ‘nature-culture hybrids’ of the societies they inhabit. Interdisciplinarity will be key to unlocking such stories, as the chimera of the Anthropocene requires the expertise of geographers, biologists, and earth-system scientists, amongst many others, to fully interpret. At the same time, as Andreas Malm and Alf Hornborg point out in The Geology of Mankind?, such professionals are not necessarily knowledgeable in the study of human relations with the planet, ‘the composition of a rock or the pattern of a jet stream being rather different from such phenomena as world-views, property and power’. Only studies that take an interdisciplinary approach will have the capability to understand what childhood means in the Anthropocene in both environmental and humanist senses.

    Adopting the methodologies of new materialism has proved one of the most popular styles of exploration in this category, with much of the work so far coming from anthropology. Away from the conceptual, the Anthropocene Epoch reminds us perhaps first and foremost of the tactile: how much the human relationship with the planet is one based fundamentally on physicality before philosophy. Where children are concerned, their physical interactions with their environments are based on specific wants that differ substantially from those of their elders, most evidently the wants of play and exploration. Children are therefore inhabitants and engineers of unique environments and relate to “adult” spaces in unconventional ways; they ‘interactively embody their surroundings through play’, as Kirsti Pedersen Gurholt describes in Curious Play. This includes a particular interest, born of novelty and of these divergent wants, in aspects of the natural world in which adults show less interest. An attraction to death and dead animals is one common current that runs through materialist analyses of childhood, from Pamela Riney-Kehrberg’s description of the bouncy-castle horse carcasses of New York’s city streets to Eduardo Kohn’s accounts of the spoils brought home from the hunts of the Quechua people that offered little interest to the adults but garnered much from the children. The presentation of unidealised accounts that genuinely examine children’s relationships with the material world, rather than those that others have conceived for them, works to undermine misleading Enlightenment conceptions of childhood that would have them repulsed by and disconnected from the “unnatural”. In the Anthropocene, where the objects and materials humans accumulate and throw away have come to be powerful agents of environmental transformation, we are required to challenge ‘deeply rooted cultural oppositions such as animate versus inanimate and active versus passive’ that ignore the materiality of the planet and of children’s lives.

    Beyond objects and materials, the landscapes of childhood are equally important to recognise as ‘inherently pedagogical contact zone[s]’, meaning a recognition that all environments are environments of learning. Young people are often drawn to the abandoned and the secretive over the idyllic, spaces such as a den or an old factory where you can “make your own fun” proving more intriguing propositions than deliberately constructed environments such as playgrounds or youth centres. These spaces that children choose to inhabit, which generally fall outside of the “adult world”, allow them greater freedom and a more authentic pedagogical relationship; relationships that will go on to be instrumental in their adult attitudes towards particular environs. As Gibson and Graham write in A Feminist Project for Belonging in the Anthropocene: ‘The Anthropocene calls to us to recognize that we are all participants in the ‘becoming world’, where everything is interconnected and learning happens in a stumbling, trial and error sort of way’. The “human age” asks us not only to consider what the environments of childhood can teach us about environment and childhood, but also what they can teach each other. Indeed, it asks us to reframe and presence spaces of childhood in the historiography that have previously been deemed “abandoned” or ahistorical.

    If we embrace the spatiality of children’s environments we gain an appreciation of children (and the natural world) as ‘social actors who are enmeshed in richly diverse social worlds’ rather than ‘separated out, disconnected individuals understood solely through developmental needs and discourses of rights’. As a global phenomenon the Anthropocene touches all human lives to greater and lesser extents, and does not do so in proportion to each group’s influence over it; thus drawing a distinction between “children’s environments” and “adult environments” as separate entities is unhelpful. If the Anthropocene does not confine itself to the adult domain, we cannot confine our studies so either. It pushes us to consider the construction of our built and landscaped environments more carefully, with greater sensitivity to how children will know, sense, touch, and exist in them. As Karen Malone concludes in Children in the Anthropocene, only with an appreciation of spatiality can we ‘acknowledge how it is to be child with a host of others and the potential differences… their ‘acting’ as an ecological collective can have on the ecosystems of the planet’.

    However, the relationship between environment and child as an element of wider earth-systems extends beyond the material. All historical agents act extensionally upon their environments through how other agents act toward and around them, and this holds especially true for agents such as children and the natural world that are perceived in wider society to lack agency of their own. Holding to Enlightenment form, whether construed as ‘wayward, chaotic and disordered’ or ‘pure, innocent, and in need of protection’, there is a sense of need, or even duty, to act for them rather than with them. In the Anthropocene, the concept of what childhood means and has meant is changing. Childhood is seen as being under threat in a way that other human life-stages cannot be, the perceived symbiosis between child and nature being so strong that the threat of anthropogenic climate change to the natural world is naturally a threat to childhood also, through cultural severance. At the same time children are cast as the saviours of the planet and symbols of environment(alism) in a process that Peter Hopkins and Rachel Pain call ‘fetishising the margins’. The natural world is simultaneously vulnerable and dangerous, especially for children, who are ‘inherently more sensitive’ to its hazards in both physical and psychological contexts. The Anthropocene invites us to consider multiplicities within childhood and environment that previously were singularities. As Alan Prout writes:

    ‘the singular universal and naturalised category of childhood [should] be replaced by childhoods understood as dynamically configured, diverse and entangled assemblages of natural, cultural and technological elements’

    The study of childhood brings to Anthropocene studies a focus on smaller, more intricate environments, where the tendency is often toward grander overarching histories of ecosystems and global networks. It asks us to consider how environment is presented to children: what narratives our stories and schooling teach about the natural world, and how those narratives influence us in adulthood. The Anthropocene also asks for a reappraisal of the narratives that adults tell themselves about childhood and environment, particularly those of nostalgia that idealise or demonise certain types of youth, as these reflect ‘anxieties about social and economic change and its impact on the child, and the individual sense of identity and belonging, present in everyday life.’ The study of how children’s lives are changing in the Anthropocene era is an important undertaking, but the conceptual framework this provides can also be used to study the history of childhood, and to tell new stories that presence the child and their environments on their own terms.

    Author/Publisher: Louis Lorenzo

    Date of Publication: 2nd of December 2020


  • Anthropocene Ecologies of Nutrition and Economy

    The Ethiopian drought of 1983 to 1985, which led to a famine that left 1.2 million people dead and 2.5 million displaced, was not an act of God. This time something was different, or at least something was different in the way people were beginning to view what environmental disasters meant in the “modern age”; more specifically, what they meant in an age where the activities of humanity were becoming increasingly inseparable from those of the rest of the natural world. Questions were being raised, as James Verdin et al. ask in Climate Science and Famine Early Warning, as to why this drought had come so quickly off the back of several other droughts the region had endured in recent years, out of step with existing cycles of Ethiopian climate that typically saw drought once a decade. Droughts occur when high temperatures increase the rate of ‘evapotranspiration’, the process by which water is lost from soil and the flora it supports. This also leads to wildfires, such as those seen in Australia in 1966, 1993, and 2019. Changes in temperature also affect rates of rainfall, as they influence air and ocean currents, making dry areas drier and wet ones wetter, meaning plants that would traditionally grow in a region will no longer do so. As food is an integral element of culture, loss of these traditional foodstuffs then damages affected peoples on a societal as well as an economic level.

    Contemporary climate scientists suspected that the Ethiopian droughts were not “natural” in origin, at least in the traditional sense of the word. Instead, they later proved that the droughts were a consequence of a tropical rain belt that had been consistently pushing southwards over the course of the latter half of the 20th century, leading to decreased rainfall across the southern Sahara. Furthermore, the reason for this southward shift was directly attributable to human activity. In an article for Geophysical Research Letters in 2013, Yen-Ting Hwang et al. explain how the release of sulphates into the atmosphere via the burning of coal in Europe and North America had been the ‘primary cause’ of the ‘aerosol cooling of the Northern Hemisphere’ and the subsequent change in global weather patterns.

    Since 1985, an increasing number of “natural” disasters have been ascribed, at least in part, to human environmental influence, and as a result historians have increasingly been looking backwards with a more critical eye toward connections that could be drawn between what had previously been viewed as the separate disciplines of “natural history” and “human history”. Amongst this discussion, one term has come to embody this newly perceived relationship: the Anthropocene, the age where, as Paul Crutzen remarked at the turn of the millennium, ‘the global effects of human activities have become clearly noticeable’. In this new state of affairs the human has taken position as an integral element of the planet’s climatic, aquatic, and other earth systems. But how has the “human age” degraded nutritional and economic ecologies? And why did humans create the system of global, industrial, capitalist agriculture that is primarily responsible for it?

    Ecosystems of Ideology

    Understanding ideology, poverty, and famine as biocultural aspects of the same system is essential in the Anthropocene. When pressure is applied to one part of the system the others will be affected, be those pressures economic, psychological, or nutritional. In 1993 India saw flooding that killed 530 people and destroyed 1.2 million acres of crops and other flora. To understand this, an approach is required that highlights three main factors of environment: the biotic, the abiotic, and the cultural. Biotic factors include the biology and health of humans, but also those of the other life forms that occupy the same environment. Abiotic factors involve the geography and climate of the environment, and cultural factors are manufactured anthropogenic elements that include ‘such phenomena as world-views, property and power’. Understanding each of these factors in any given environment requires understanding of the other two; in other words, they must all be seen as variables of the ecosystem. In this case, increasing pollution in the atmosphere was responsible for creating a more intense greenhouse effect that led to greater oceanic evaporation and more rainfall, followed by these floods that left millions of people homeless. The flooding also caused soil erosion that further impacted agricultural productivity during recovery and therefore increased nutritional stress. In this way the cultural (or ideological) drive of industrialisation in the market economy acted as a force of oppression via abiotic and then biotic factors.

    Often such relationships are non-obvious, as in how the mineral content of a community’s water will be based on the composition of the rock into which they have dug wells. Those with less natural fluoride available in their water, such as the people of Huila province in Colombia, for example, will have higher rates of tooth decay and thus more nutritional complications. On the other side, those with too much fluoride can suffer ‘enamel fluorosis’, impairing tooth development when young. Such environmental inhibitors can range from the relatively insignificant, as with the fluoride content of water, to the far more serious, as with life expectancies in northern China being reduced by five and a half years compared with the south from 1950 onwards. Yuyu Chen found that, because of its cooler climate and the consequent burning of more coal, air pollution in northern China had become considerably higher than in the south, and that this is by a large margin the deciding factor in the increased rate of cardiorespiratory deaths in the region.

    However, whilst the recent Anthropocene has seen a greater prevalence of human-induced environmental change, the scientific consensus is that humans have been contributing to the warming of the planet for at least a century, and historians note that humanity had been shaping the planet’s ecosystems long before that. As Robert Goodland and Jeff Anhang explain in Livestock and Climate Change, measurements of rising undersea and atmospheric temperatures that fall far outside normal variability are solid evidence of the impact that industrial agriculture and power production have had in producing global warming. This phenomenon has polarised the earth’s climatic zones, shuffling them north in the northern hemisphere and south in the southern, producing the most dangerous changes in areas that were already “on the edges” of these zones. These most affected areas, being those of more extreme conditions, are typically home to marginalised peoples, flora, and fauna with more specialised requirements, making the changes all the more devastating. This is why, according to the UN, 99% of deaths attributable to climate change have occurred in developing nations. As David Ciplet et al. explain in Power in a Warming World, the advent of the Anthropocene has only increased the ways social inequality and poverty translate into poor health, through factors such as increases in stress hormones, exposure to dangerous toxins, and diminished access to healthcare. Those living in poverty are more likely to live near toxic sites, such as the residents alongside the oil fields of the Niger delta, 60% of whom say their health is being affected by air, water, and land pollution. As fossil fuels become scarcer, the methods of extracting them become more inefficient in terms of both energy and other resources such as water. Techniques such as hydraulic fracturing require large volumes of fresh water and pollute local water cycles, competing with growing human populations for a valuable resource.

    Anthropocene Ideology

    For historians, however, “Anthropocene” is not only the name of the recent epoch but also a methodological framework: the study of the history of environmental events in the context of the Anthropocene. To this end, the material culture of civilisations is an essential source. For example, we can use palaeopathological methods to examine the skeletal structure of Mayans from the 6th to 10th centuries, as William Haviland did in the 1960s, and note that people were growing shorter over time. Simultaneously we see fewer animal bones in the record, indicating a reduction in food availability. At the same time, however, the entombed skeletons of the elite did not change in size, showing that the nutritional stress on the population had both environmental and societal causes. This historical practice of ‘medical ecology’ is still an emerging discipline and requires historians not only to appreciate which environmental factors affect a person’s health, but also how those consequences tie back into the earth-systems they inhabit. Famine and poverty exist in cycles, or spirals, that create the conditions for their own continued deterioration.

    Anthropogenic climate change has contributed to the cycle of poverty by putting excess stress on individuals and communities that lacked the resources, be those economic, political, agrarian or otherwise, to cope with the change. For example, as human settlements during the 20th century were increasingly built in, or climatically moved into, areas of the world that had previously been too hot to be hospitable, such as Saudi Arabia, more and more people moved into housing designed explicitly with air conditioning in mind. Problems then arose for people living in poverty who could not afford this “convenience” and had their economic and physical health threatened as a result, a condition known as ‘cooling poverty’. To borrow Nancy Romero-Daza’s term for the relationship between violence, drug abuse, prostitution, and HIV/AIDS, the relationship between ideology, poverty, famine, and environment is syndemic: they are ‘not simply concurrent problems, but rather constitute a set of mutually reinforcing interconnected epidemics’. For example, when the Nipah virus broke out in Malaysia in 1998, those who suffered from it were overwhelmingly ethnic Chinese rather than Malay. This was not because the Chinese were biologically vulnerable compared to the rest of the population, but because they provided the cheap labour on the pig farms where the disease originated. Given this context, an even more appropriate term for the relationship between poverty, famine, and ecological degradation in the Anthropocene would be ecosyndemic, an idea that places emphasis on environmental factors in the creation of poverty and famine in the 20th and 21st centuries.

    Experiences of poverty and famine are ideological, in that they exist within a set of power relationships between those experiencing them and those not. As Ann McElroy and Patricia Townsend describe in Medical Anthropology in Ecological Perspective: ‘Sickness is a social category – the sick role in a particular society, the way a person who is ill is expected to behave’. Looking at poverty and famine in the Anthropocene, these relationships are more complex: if the whole world is “sick” in the Anthropocene, who plays the doctor? With the 1983 Ethiopian drought, it was the nations that had caused the disaster to begin with, inadvertently or otherwise, who acted as healers by providing financial support through the Live Aid event. In these western nations, mid-20th-century optimism, particularly in America, was typified by an ideological belief that technological advances in nuclear energy, antibiotics, and the agriculture of the “green revolution” would create a world without poverty, disease, or hunger. Such ideas were strongly rationalistic, based on the presumption that humans could control their environment through technology; but the famines, epidemics, and other “natural” disasters of the late 20th and early 21st centuries speak otherwise. The environmental consequences of Anthropocene ideologies have served to undermine them, and have brought forward new arguments that poverty and famine can only be addressed by working within the frameworks of the natural world rather than over them.

    Looking back into our history, the Anthropocene conceptual framework pushes us to ask where human decisions about the environment played a role in causing poverty and famine. Peoples who transitioned from hunter-gatherer to settled agricultural societies, for example, created living environments in which new types of disease evolved through closer contact with animals. Clearing land made new breeding places for mosquitoes, and digging irrigation ditches made more homes for the tiny parasitic worms that cause bilharzia and therefore anaemia. As Mark Cohen and Gillian Crane-Kramer write in Ancient Health, skeletal records of early agricultural societies show ‘increased nutritional stress’ compared to their hunter-gatherer contemporaries, whose more varied, higher-energy, and more flexible diets made them less susceptible both to famine and to specific nutritional deficiencies. With the emergence of the city environment in the historical record, Cohen and Crane-Kramer also note, cultural differences between peoples begin to play more important roles in their lifestyles. Indeed, the general trend identified in societies from the advent of agriculture to the modern day has been an increase in social stratification related to changing economic conditions based on environmental factors. Cases of poverty and famine in the Anthropocene follow this trend, being most acutely felt by specific groups on the lower levels of social strata. Such experiences become less universal within a society and more variable based on a person’s specific place in the nature-culture order.

    The great Chinese famine of the late 1950s and early 1960s, as Dali Yang notes, is one example of a famine considered to be predominantly of “man-made” causes, in that it resulted from ideological policy decisions that caused falls in food production. In this example the connection between human decision and human suffering is clear, but because in the Anthropocene poverty and famine are often more abstracted from their ideological causes via the environment, lines of causation are harder to conceive of and to draw. Indeed, the Anthropocene leaves little room for policy makers to blame “natural disasters” for crises, as the Chinese establishment did. What makes this more difficult is that the decisions behind the anthropogenic climate change that has contributed to famine and poverty in modern history were generally made in countries other than those which were harmed. As René Dubos wrote in Science and Man’s Nature, often the most widespread impacts of ecological stresses come not from the events themselves, but from the social organisation and behavioural traits of the societies that surround them.

    Minority Ecologies of the Anthropocene

    Whilst peoples who live outside, or even tangential to, globalised consumerist economies make up a fraction of the earth’s human population, they are responsible for managing c.25% of the world’s tropical forests. For these people, even if the Anthropocene does not bring full famine or poverty, it can still bring aspects of those things. Human bodies require a complex array of nutrients to operate efficiently, and these are obtained in different ways by different cultures around the world. In the South American rainforests, for example, where the tropical climate results in acidic soil and plant life of low nutritional value, there is no ready supply of animals to provide protein for a person’s diet. In many indigenous diets where meat has been uncommon, protein was therefore obtained from a mix of maize and beans, which complement each other when eaten together. In the 1960s these two staples began to be displaced by the consumerist economies of the Anthropocene, resulting in less healthy and nutritious diets, if not hunger. This pattern also holds true for the peoples of the Kalahari Desert, where food obtained from hunting and foraging has steadily been replaced by imported commercial products such as cornmeal and sugar. Where at the start of the 20th century almost all of their nutrition came from wild sources, by 1980 this figure was down to 20%. This came with benefits as well as disadvantages: an increase in nutritional deficiencies, but also a decrease in infant mortality due to a greater availability of milk. This exemplifies how the Anthropocene influences not only the production of food in different environments, but also the way it is distributed throughout a given economy and the way it is culturally viewed and prepared.

    Hunter-gatherer societies’ food sources are not only put under threat by climate change, however. According to the International Union for Conservation of Nature (IUCN), habitat loss has been the primary cause of extinctions and endangerments of mammals, birds, and amphibians around the world, and whilst climate change contributes to habitat loss, most of it has resulted from more physically immediate human action: the remodelling of landscapes into agricultural land. This decrease in habitat size has reduced the number of plants and animals from which to gather.

    In contrast to foragers, people worldwide who live as subsistence farmers (approximately 75% of the world’s 1.2 billion poor) have diets that depend on one specific staple: in temperate European and Asian climates wheat, in Africa millet or sorghum, in South and Southeast Asia rice, and in the Americas maize. Economies built in this style are able to produce more food than hunter-gatherers through more intensive agricultural techniques, but are bound by that single staple’s limitations. Such peoples have been more vulnerable to deficiencies of certain vitamins or minerals, and more likely to have seasonal hunger incorporated into their farming routines before a harvest. It also means that should the staple fall prey to blight, drought, or another limiter, the entire system collapses, leading to famine. This occurred in Kenya in the 1980s, when the economic pressures of Anthropocene markets caused an increase in poverty and malnutrition because pastoralists were unable to sell their produce at sustainable prices. In the Anthropocene this is becoming more common, and so this style of subsistence, like hunter-gathering, is becoming less viable. Traditional cuisines around the world have thus been put under threat in the Anthropocene by ecological as well as economic contributors. Local foodstuffs are highly cultural, so this represents societal damage; but they are also the end products of a process of adaptive selection over time, whereby different culinary combinations have been trialled and those best suited to the environment chosen. Changing these diets without consideration of local particularities is therefore more likely to be a force of harm than of help.

    The Anthropocene has created environmental conditions which make methods of subsistence other than the one which birthed it, global industrial commercialised production, more difficult to maintain. By limiting wild spaces for foraging through habitat destruction, and by increasing the “natural” disasters that decimate traditional farming techniques, those older forms of food acquisition are attacked and diminished. This is because the majority of the energy utilised in industrial agriculture does not come from human labour, which supplies only around 5% of it, compared to around 90% in its alternatives; instead it draws on energy sources such as coal and oil that produce high amounts of pollution. Industrial agriculture is vulnerable to the same threats as subsistence agriculture, namely the cultivation of monocultures susceptible to disease, but this is remedied with pesticides, which in turn harm the health of the humans and ecosystems they come into contact with. When pesticides of the neonicotinoid group caused ‘colony collapse disorder’ in bees in Massachusetts in 2013, for example, the loss of pollinators damaged the whole ecosystem, including the yields of the very farmers who used those chemicals. Furthermore, such crops have less nutritional content overall than crops grown without pesticides or glyphosate herbicides. Indeed, the use of these chemicals, together with artificial fertilisers, ultimately degrades the soil and water resources of the ecosystem to such an extent that it becomes unsustainable, especially in an environment of increased drought, and contributes to the growth of cyanobacteria via runoff into bodies of water. These bacteria flourish in the excesses of nitrogen and other chemicals that fertiliser provides, and in turn harm the health of aquatic life and of the humans who drink the water, eat its products, or use it for recreation.

    Anthropocene Societies

    The nutritional and economic stresses of the Anthropocene are not found only in traditional agricultural and foraging environments, however, although these are the most drastically affected. In America, for example, at least 15% of households suffer from nutritional stress and food insecurity despite the availability of charities and government programs. These same households are also prone to obesity, because both conditions stem from a lack of healthy food in a person’s diet rather than a lack of food overall. The societies most responsible for creating the Anthropocene, which are also those that have profited from it, have seen a shift toward meat as the prime source of protein in their diets, and therefore increased intakes of animal fats. Alongside this, people living on ‘supermarket diets’ see increased intakes of high-glucose and high-fructose foodstuffs, which are associated with obesity and several chronic illnesses including diabetes, various autoimmune diseases, and kidney inflammation.

    Industrial meat production chains that have used antibiotics to keep animals in closer quarters have bred resistance in the pathogens they were targeting, making those illnesses harder to treat in human populations. Additionally, the estrogenic chemicals used in the production of plastic packaging in industrial food networks, and those used on animals and crops, are suggested to promote the growth of fat cells in the body. These chemicals disrupt the endocrine system of a human or other animal, meaning the hormones used to regulate fat growth can be confused; it is also likely that such defects can be passed down to the children of parents exposed to these chemicals pre- or postnatally. At the same time, the inspection of fresh produce has not been able to keep pace with production or global distribution, so foodstuffs contaminated by bacteria or pesticides have been more likely to reach cooking pots. This has also made the task of tracing contaminants back to their source more difficult.

    As Anna Bellisari describes the phenomenon in The Obesity Epidemic, obesity is ‘the predictable outcome of the highly evolved human metabolic system functioning in an obesogenic environment created to fulfil the American Dream’. This highlights how the cultural aspect of an environment can fundamentally impact the biotic and abiotic. The supermarket societies of the 20th and 21st centuries that have seen obesity epidemics have done so primarily because their economic and political structures created a gastronomic environment wherein unhealthy foodstuffs were promoted because they were more lucrative than healthier alternatives. Additionally, the promotion of the idea that a person’s fatness is wholly a matter of individual choice, as opposed to being influenced by structural elements of the nutritional and health environments in which they live, has allowed the continuation of this structure and the growth of other industries surrounding diet and fitness.

    The increased yields typical of industrial agriculture also sometimes translate into less, or worse, nutrition for people. In the Central American nations where the production of products such as beef increased dramatically during the 1960s, consumption of beef in that part of the world went down, the ranchers themselves having to ‘pay more for less’ as the food was exported to other markets around the world. These globalised markets of industrial societies have also widened the scope of food crises such as droughts, particularly for those living in poverty: a poor harvest in one key area of the world can have devastating impacts on the nutritional health of people the world over, as with the Millennium Drought in Australia (1996 to 2010), which caused a spike in grain prices and led to deaths and protests in over 50 nations.

    Recent history has challenged historians to change the ways they think about phenomena such as ideology, poverty, and famine. The Anthropocene’s ecosyndemic systems of nutrition and economy can only be grappled with through a holistic understanding of each as a node in a network of causality that is often non-obvious and multidirectional.

    Author/Publisher: Louis Lorenzo

    Date of Publication: 4th of August 2020