
  • Chapter 1: Changing Childhoods and Changing Britain: Expert Discourse and the Universal Child – The Natural Habitat of Youth?


    1.1 Introduction
    1.2 The Urbanised Landscape: Constructions of the Urban as Anti-Child
      1.2.1 Driven Out; Driven In: The Rise of the Car
      1.2.2 Places without Play? Making Space for the ‘Normal’ Children
    1.3 Safeguarding, Crime, and Children: Challenging Traditional Expertise
    1.4 Technology: Exploring an Unknown Environment
    1.5 Conclusion
    References

    1.1 Introduction

    This chapter examines how expert discourse, in both policymaking and academic circles, sought to understand and shape children’s lived experiences of place and play in Britain leading up to and during the 1980s, 1990s, and 2000s, bringing different categories of discourse into dialogue. The framework of rules that experts built up attempted to define where children could be and what they could do there, as well as the physical makeup of those places and parental and societal attitudes towards them. The chapter applies du Gay et al.’s analytical framework of the ‘circuit of culture’, which identifies five key aspects to consider when analysing cultural texts: representation, identity, production, consumption, and regulation.1 In particular, Mora et al.’s connection of the circuit to the study of material culture allows this chapter to examine experts’ changing approaches to childhood environments as cultural artefacts to be understood through the circuit’s five key aspects.2 This methodology reveals how the changing attitudes of experts in planning and policymaking circles during the 1980s, 1990s, and 2000s translated into material changes in the character and policing of the environments of British childhoods. Evolving from the ‘child centred’ pedagogy of the post-war era, the late 20th century saw the continued development of discourses that conceived of ‘a universalist model of childhood vulnerability, characterised around an ageless, classless, genderless “child”’.3 At the same time, public discourse (discussed in Chapter Two) was beginning to challenge traditional sources of expertise and to forward child-led experiential methods, which were sometimes incorporated and sometimes rejected by planners and policymakers.
Many academics during this period – in conversation with policymakers and planners but also apart – were also in tension with these traditional sources, forwarding an alternative vision of childhood that was less safety-conscious and more freedom-conscious.

    During the last three decades of the twentieth century in Britain there were three main expert discourses surrounding children and their environments, each concerned with a factor that was perceived to threaten or degrade those environments. As will also become evident in Chapter Two’s assessment of the impact of media discourse, each of these points of concern reflected the moral panics that were coming to surround constructions of childhood. The first threat was the urbanisation of the landscape, more specifically the increasing dominance of cars and roads. The second was fear of dangerous ‘strangers’ on the streets and, from the 1990s onwards, of dangerous ‘youth’. The third was technologies such as the TV, games console, mobile phone, and internet. All three subjects captured the attention of experts, but professional opinion was divided: whilst almost all experts agreed on what constituted these ‘new’ threats to childhood, policymakers and academics were quickly split on proposed solutions. In general, policymakers approached problems from a health-and-safety perspective that led to greater regulation of children’s independence and mobility in response to these threats, as can be seen in the reports, white papers, speeches, and bills produced by the government and civil service during this period. Conversely, academics addressed the same problems from a freedom-and-agency perspective, consistently arguing for less regulation of children and more regulation directed towards creating child-friendly public spaces, whilst also studying the impacts (health and otherwise) of changing policy.

    These competing views interacted during a period when public as well as expert attitudes towards the two realms of the ‘outdoors’ and ‘indoors’ were changing. The threats of urbanism and strangers were said to be pushing children away from the outdoors, but it also appeared that technology was pulling them in. The indoor environment was predictable and safe, yet also unhealthy and sedentary. The outdoors was unpredictable and dangerous, yet also active and exploratory. A set of social stereotypes accompanied and perpetuated these representations: indoors was for girls, outdoors for boys; indoors for the young, outdoors for the old. Furthermore, because these ideas were about the environment, they were contingent not only on a family’s personal wealth and social class, but on that of the community they lived in.

    Chapters Three and Four of this thesis will look at the attitudes and actions of children themselves, those most directly affected by the policy and cultural outcomes of this expert discourse, in two specific North East communities. The subject that most impacted them, I argue, was the one that most drastically altered the physical environment: the expansion of urban, car-based landscapes. This had a very significant impact on where it was deemed appropriate for children to be, and experts played a fundamental role in both implementing and condemning these changes.

    1.2 The Urbanised Landscape: Constructions of the Urban as Anti-Child

    1.2.1 Driven Out; Driven In: The Rise of the Car

    Following the post-war baby boom, the volume of vehicles on Britain’s roads increased as dramatically as its population. The Preston Bypass, Britain’s first motorway, opened in 1958, and in 1963 the Ministry of Transport’s Traffic in Towns report forecast that the car would soon be taken ‘as much for granted as an overcoat’, even though 70% of households did not yet own one.4 Michael Dower’s 1965 ‘Fourth Wave’ report was also influential on experts in the field, tying the idea of a new ‘leisure oriented existence’ for the British citizenry to car-oriented infrastructure.5 The car was seen not only as inevitable but as essential to a modern economy, and in the post-war decades rates of car ownership increased to the point that 1981 marked the first time that more British households owned a car than did not.6 Furthermore, the 1980s alone saw a 50% increase in the volume of vehicles on Britain’s roads, more than in any decade before or since.7 In 2011, the Department for Transport (DfT) estimated that whereas there were approximately 12 million vehicles in Britain in 1970, by 2010 there were 34 million, with the 1980s providing the largest decadal increase.8

    In government, this expansion was embraced with a pro-road agenda to serve what Margaret Thatcher called ‘the great car economy’.9 The 1989 Roads for Prosperity white paper, and its lesser-known follow-up Trunk Roads, England into the 1990s, detailed the plan to embark upon what the government touted as ‘the biggest road-building programme since the Romans’, based upon a predicted 142% increase in traffic by 2025.10 The 500 road schemes proposed were to cost £23bn, and as a result of the government’s enthusiasm and funding, 24,000 miles of new road were built between 1985 and 1995.11 However, during the 1990s it became increasingly clear that the 142% prediction upon which the Roads for Prosperity programme was built was a wild overestimate, with the figure revised to closer to 40%, and as such John Major’s Conservative government cut spending on roads significantly during its last years in power.12 Furthermore, the road-building agenda had sparked a significant number of protest groups around the country opposing works in their local areas, alongside national protest organisations like Alarm UK!, formed in 1991 in direct response to Roads for Prosperity.13 The construction of the M3 through Twyford Down in 1992 catalysed popular support for protests against road building, as the site had been the ‘most protected landscape in southern England’ before the works began, containing two Sites of Special Scientific Interest, two scheduled ancient monuments, seven rare species, and a designated Area of Outstanding Natural Beauty.14 The European Union issued a complaint, and several organisations protested at the site including Friends of the Earth, Alarm UK!, EarthFirst! and the Dongas Tribe, a group of ‘new age’ travellers local to the area, who garnered public support in particular after featuring on Channel 4’s Dispatches programme.15 Further road projects that received significant opposition were the Newbury Bypass (1994), the M77 in Glasgow (1994), and the M11 link road (1995); in the North East, Newcastle saw one of the earliest protests when the Flowerpot Tribe occupied trees in Jesmond Dene in 1993 against the construction of the Cradlewell Bypass.16

    Due to this local and national protest, and to the forecasted explosion in traffic failing to materialise, by 1996 most of the road schemes had been cut. After coming to power in 1997 Labour cut the programme down further, from an initial 500 schemes to 37, with John Prescott promising ‘many more people using public transport and far fewer journeys by car’.17 Rates of increase in car ownership and traffic slowed, and between 1996 and 2006 the road network expanded in total by just 1.6%.18 However, whilst more gradual, increases over the 1990s and 2000s still led to the point where 2006 was the first year that more households owned two cars than none, and indeed Labour embarked on its own programme of road construction following its 2000 ten-year transport plan, with £59bn earmarked for new roads.19 In Chapters Three and Four, we will see how this car-oriented governance impacted childhoods even in working-class North East neighbourhoods where rates of car ownership fell far below the national average (whereas most British households owned a car by 1981, Newcastle and Gateshead only reached this level in the mid-2000s).20

    Figure I. A DfT graph estimating the rise of traffic in Britain over 70 years, 2019.21

    The motorisation of the British landscape, driven by the policies of planners and experts in government, saw parents grow more wary of letting their children out to play or walk to school and other activities. The term ‘helicopter parent’ had been coined in 1969, but it became commonplace in Britain and America in the late 1980s as more and more parents ferried their children by car.22 As will be explored later in this chapter, much was made of the rise of the driven child by academics across the period, such as in Mead’s ‘Neighbourhoods and Human Needs’ (1984), Bartlett’s Cities for Children (1999), and Frost’s A History of Children’s Play (2010).23 These contributions developed existing anxieties about road safety, particularly in working-class neighbourhoods, which more often had busy roads running through them and relatively little access to alternative outdoor spaces such as parks.24 These developments raised serious concerns about the health and safety of children, and expert discourse began to focus on what could be done to keep children safe. Experts presented a range of views, but generally opinion fell into two camps. The first view, the predominant position of both Conservative and Labour governments, was to accept cars as necessary and instead focus on what children and parents could do to protect themselves from road injury. The second view, promoted by the freedom-and-agency school of academic commentators such as Hillman and Adams (and many others who wrote for the Children’s Environments journal), was that cars were not so necessary, and that work should be undertaken to reduce traffic and make streets safer for children.25

    The first view had been held by governments since the 1960s and expressed itself most publicly through road safety advertising campaigns. These campaigns were usually aimed at children or parents, and commonly featured children as road victims: one of the earliest road Public Information Films (PIFs) was Batman’s Kerb Drill in 1963.26 The 1980 Mark PIF unequivocally told parents to ‘Make sure the under fives stay inside’, and 1987’s Funeral Blues used footage of a real funeral, showing a class of children mourning their dead friend.27 Many campaigns also centred on the danger caused by drink-driving, such as 1983’s Fancy a Jar? Forget the Car and 1995’s One More, Dave. These PIFs placed the onus on drivers to be responsible on the roads rather than parents or children, but by focussing on alcohol they ignored the 75-90% of road fatalities that did not involve drink drivers in the 1980s, 90s, and 2000s.28 More to the point, these films clearly evidence that their creators did not consider road infrastructure itself to be an issue, only the people using it. The 1990s Kathy Can’t Sleep and 2000s Hedgehog Family PIFs – among many others – condemned bad drivers, but still framed the dangers of the road as inherent and thus emphasised the responsibility of the pedestrian to be safe.29 Alongside improvements to the safety features of cars, this messaging was effective: child injuries and deaths on roads did decrease during this period despite increased road traffic. Indeed, in the long run the rate of child deaths per vehicle in Britain fell consistently from 1922 to 1992 by over 98%, from c.80 deaths per 100,000 vehicles to c.2.30 In 2000 THINK!
    was established as a specific government road-safety campaign, and in 2010 it reported that road deaths had fallen by a further 46% over the decade.31 This did not mean the roads were environmentally safer, of course – although some residential areas did have 20mph speed limits introduced from 1991 onwards – rather, it meant that expectations and behaviours surrounding roads had changed with the rise of a more safety-conscious mindset.32

    The expansion of the road network and the emphasis placed on pedestrian responsibility logically led policymakers and urban planners during the 1980s to the idea of segregating road and footpath networks: if cars and pedestrians never came into contact, both would be safer. Since the beginning of the century the car had already been slowly changing what people saw the ‘street’ as being for, as children’s play came to be understood as being in conflict with its function as a transport corridor.33 From the 1960s onwards, guard rails and ‘cattle pen’ road islands became popular in road design as they allowed speed limits to be raised whilst ostensibly keeping pedestrians safe.34 However, as Ishaque and Noland point out, this not only cut people off from using streets, but there is also ‘no conclusive research evidence’ on whether guard rails made pedestrians safer overall.35 This is because they frustrated pedestrians into crossing at unmarked points, allowed for increased speed limits, and gave drivers a false sense of separation, since they were generally not strong enough to stop a speeding car in the event of a collision.36 The remnants of Newcastle council’s plans in the 1960s and 1970s to fully separate people and roads via a system of skywalks can still be seen today, an emblematic relic of a vision that sought to separate people from the streets, and demonstrative of the fact that this design ethos was present in both national Conservative and Labour governments and North East Labour councils.37 While skywalks fell out of fashion in the 1980s, the principle of car-pedestrian traffic segregation continued into the 1990s and 2000s.
    The DfT’s 1995 Design of Pedestrian Crossings paper recommended greater use of guard railings and traffic islands to local councils, along with ‘any other means of deterring pedestrians to prevent indiscriminate crossing of the carriageway’.38 Segregated networks affected children in an especially acute way, because they allowed the conversion of streets (places of mixed use) into roads (used only for the purpose of transport), most profoundly impacting those who had used the street as a destination rather than a thoroughfare. It is a much riskier proposition to play football on a road, or a skywalk, than on a street.

    Figure II. One of Newcastle’s last skywalks curving past Manors car park.39

    Throughout the second half of the 20th century children had been losing ground to cars, but at the turn of the millennium there was an effort to reclaim the street as a pedestrian space with the introduction of the ‘Home Zone’ scheme. In part, home zones were based on Dutch ‘Living Street’ schemes, but they were also a resurrection of the 1938 Street Playgrounds Act, which had given local authorities the power to close streets ‘to enable them to be used as playgrounds for children’.40 Indeed, play streets had never technically been abolished, but from the mid-1960s onward they simply started disappearing as councils either removed or stopped enforcing Play Street Orders as cars proliferated.41 The DfT described home zones on its website in 2005 as having ‘children in mind’ and being ‘places for people, not just for traffic’.42 The DfT’s 2001 Home Zone Design Handbook also specifically framed the programme as endeavouring to create ‘streets where children can play safely’.43 Public and local government support for the schemes was strong, so much so that the government launched the £30 million programme whilst the pilot was still ongoing, styling it as a ‘challenge’ in which local authorities competed for funding; ultimately sixty-one projects were undertaken.44 The academic response at the time, however, was unimpressed, with commentators complaining that the schemes were few in number and limited in scope. As Gill argued, ‘few schemes have succeeded in creating spaces between houses that look as if they are genuinely designed for social rather than car use’.45

    Ultimately the weight of expert opinion behind policymaking and urban planning in the 2000s had shifted little from the 1980s design principles that had made vehicles a priority and pedestrian protections a lesser one. In comparison to Labour’s investment of £16.2bn over ten years in new road construction under the 2000 Strategic Road Network Scheme, the one-off £30m Home Zone Scheme was more of a trial than a genuine attempt to reconfigure the character of the British road network.46 In its second term Labour invested significantly more in road projects than in public transport, and in 2002 quietly shelved its target of cutting congestion by 2010.47 Whilst New Labour had promised a move away from Conservative car-oriented transport policy – and did initially make moves to do so – over time its approach ‘reverted to the mean’, largely due to the scale of the task and Blair himself having ‘little interest in transport’.48

    The Department of the Environment (DoE) stated in its 1990 This Common Inheritance paper on the future of British land management that ‘The Government welcomes the continuing widening of car ownership as an important aspect of personal freedom and choice’. In doing so it failed to recognise that freedom for drivers limited the freedom of non-drivers, children especially.49 As Hillman and Adams reported upon the publication of This Common Inheritance in 1990, their data showed that ‘only 9%’ of 7-8 year-old children were allowed to go to school on their own, whereas 19 years earlier in 1971 this figure had been 80%.50 Hillman and Adams lamented these ‘restrictions on independent mobility’, framing the issue around parental restrictions, but this trend was as much a direct consequence of government policy as of parenting.51 The DfT’s 1990 Children and Roads: A Safer Way plan concluded with the intention to ‘educate parents so that they more fully understand the risks involved and therefore take responsibility for the safety of their children’, continuing the characterisation of road danger as an issue of ignorance.52 Children and Roads notably also included plans to lower speed limits around schools and residential areas; in implementation, however, most of the scheme’s efforts were spent on encouraging parents to keep their children off the streets, increasing road safety training in schools, and campaigning against drink-driving.
    The ‘main elements’ of the scheme, as described by minister Christopher Chope, were ‘a television commercial… designed to bring home to parents and to motorists the scale of the problem and 13 million leaflets for parents and drivers giving advice on what they can do to ensure that children are safe on our roads’.53 A similar stance was adopted towards cyclists who, it was suggested, should wear dayglo vests when riding, as ‘conspicuity is vital for any cyclist who is concerned about his or her safety’.54 That roads were for cars first and foremost was an assumption that went practically unquestioned, and as cars were so dangerous to children, children needed to be kept away from them.

    1.2.2 Places without Play? Making Space for the ‘Normal’ Children

    Children’s physical safety was not the only reason successive governments took a restrictive approach to regulating childhood mobility. The practice had a history of being partly an attempt to protect children’s moral health too. Early attempts to remove children from the streets in the 1900s were undertaken in the name of what became known as the ‘child saving movement’. The movement, which first emerged in the US, was initially based around the creation of a juvenile court system but also encompassed more general efforts to combat ‘juvenile delinquency’, more so than keeping children from under the wheels of motor cars.55 The development of playgrounds as specific off-street play spaces was closely linked with this movement, which, whilst having no single campaign behind it, was in Britain mostly centred around the Societies for the Prevention of Cruelty to Children (SPCCs), consolidated into a national charity in 1889.56 The very first such playgrounds in the world were opened in Manchester in 1859, and many more opened in the years afterwards as part of the shift pushing British youth towards more guarded forms of play.57 Octavia Hill, one of the three founders of the National Trust and a campaigner for the protection of green spaces like Hampstead Heath, pioneered this type of campaigning in her concern for the poor of London. The playgrounds she created, however, did not provide an equitable alternative to street play: they charged a fee for entry, were supervised by adults, were walled off from the rest of the neighbourhood, and were not open all the time. It is perhaps little wonder, then, that they were often vandalised by local children.58 Because the play space provided was of a single, universal type, it excluded all the children who did not fit the normative idea of ‘the child’ who would be using it, most evidently the children whose families could not pay to access it.

    As Anthony Platt argues in Child Savers, the child-saving movement was largely led by parents of upper- and middle-class backgrounds and largely directed at the children of working-class parents. Whilst intentions may have been noble and the movement was a force for good in many young people’s lives, at its core it was the imposition of a method of social control on many working-class people: people who may have gained a play park but had lost the streets to the vehicles of the middle and upper classes.59 In 2009, when Platt revisited his 1977 edition of Child Savers, his main point of revision was to emphasise the ‘staying power’ of the 19th-century idea of ‘hard-core biological determinism’ in planning and policymaking expert circles.60 By this he meant that, subconsciously or otherwise, the approach to the management of working-class children’s environments in middle- and upper-class expert circles in the 21st century still followed a ‘social Darwinist ideology’ that sought to reform children by removing them from the streets, which, being the domain of the working class, would corrupt them.61

    The child-saving approach waned in the 1930s and 1940s, but in the 1950s and 1960s it took hold again in a changed form known as ‘child centred’ pedagogy.62 In theory child centredness meant talking to children and basing childcare and education around their needs, but as Tisdall argues in A Progressive Education?, the underlying logic of this philosophy was that children were ‘fundamentally separate from adults, distinguished by their developmental immaturity’.63 Everything was to be done for children because children themselves could not be trusted to do things for themselves, in the same vein as the child-saving movement had not trusted children to play on the street by themselves. Whilst child-centred pedagogy evolved to be ever more responsive to children throughout this period, it never conceded any control to children, only contingent consultation.64 Evidence of this approach in the 1980s can be seen in safety legislation introduced to regulate playground equipment, such as the sharpness of edges and the size of gaps between components, whilst informal, unregulated places of play such as former bombsites and scrublands were increasingly subject to redevelopment, as catalogued in contemporary academic works like Robin Moore’s Childhood’s Domain.65 Moore pointed out that the establishment of the Association for Children’s Play and Recreation (ACPR) in 1983, a charity tied to the National Playing Fields Association with the aim of providing play spaces such as adventure playgrounds, was both a response to decreasing outdoor play and a means by which to control it, because recent developments had led Britain to a point where ‘children’s play must be increasingly regarded as a policy imperative’.66

    The aim of the ACPR was to provide places for children to play, and this meant getting them off the road. Closely mirroring early 20th-century ideals, children playing games in the street were increasingly seen as a nuisance or menace, as they might get in the way of moving vehicles or damage a parked one, as evidenced in the slow but steady un-designation of Play Streets across Britain during this period.67 Teenagers especially were the target of accusations of ‘hanging around’ on streets, although they were also unwelcome on playgrounds, hence the propensity of some to seek out abandoned or cut-off places to be in, where they could ‘look out and not be seen’, as Patsy Owens’ 1988 interviews documented.68 This fear of the child will be explored in the next section of this chapter, but the point here is that experts in policymaking and planning – by encouraging children to keep out of the street – were following a tradition that cast independent outdoor street play as both physically and morally dangerous.

    This aspect of the universalised approach to childhood environments is especially pertinent to the context of the North East in the 1980s and onwards, as more and more of the region’s characteristic urban play space – the back lane – became less attractive or off-limits to children, with relatively little land provided as replacement in the form of playgrounds. This was compounded by parallel developments in 1980s British society, which saw an increasing emphasis on the importance of individual identity and agency, causing communal spaces such as parks to fall out of favour with parents as play areas compared to individualised spaces such as private gardens and living rooms.69 The government’s encouragement of the creation and commercialisation of semi-public, semi-private areas such as malls and town centres led to a situation where even semi-public, car-free places were unfriendly to the idea of young people ‘hanging around’, pushing children to the fringes. This created a new role for the police in ‘protecting the interests of private business and regulating the activities of the “non-consumer”’.70 In 1989, Nikolas Rose characterised recent developments as a pervasive underlying ‘process of bureaucratisation’.71 Rob White worried that this situation ‘frequently leads to conflict between the police and teenagers over the use of public spaces’.72 Once again, working-class children, less likely to have access to significant private outdoor space and more likely to be in commercial areas as ‘non-consumers’, were more often impacted by this change in the policing of environments than their wealthier contemporaries.

    Experts in the fields of education and urban planning did not always support this trend, but mostly accepted it as unassailable and instead spent their energies discussing how the ‘playground of tomorrow’ could be made into a safe, fun, and integral part of modern Britain.73 The consequences of focussing on playgrounds over broader play-friendly civic spaces were not lost on designers, however, who foresaw that ‘such a setting would not make a good play environment because it would lack many of those elements necessary for meaningful play: variety, complexity, challenge, flexibility, adaptability, etc.’ and that children ‘want to be where it’s at, to see what is going on, to engage with the world beyond’.74 Paul Wilkinson even noted that contemporary playgrounds ‘are not being heavily used because children do not like them; simply put they are neither fun nor challenging. Incidentally, this also gives them the appearance of being safe: few accidents are reported because few children use them’.75 However, in the face of a lack of funding and political will, the prospect of changing the entire environment rather than creating better refuges from it – making the world safe for play rather than making a world safe for play – seemed an impossible task and was thus not seriously considered by many.76

    However, less constrained by practicalities and more concerned with possibilities, the freedom-and-agency school of academic commentators (prominent voices like Sheridan Bartlett, Ulrich Beck, Louise Chawla, Mayer Hillman, and Robin Moore) took a very different view, even if ultimately – as we shall see – the shift away from street play could not be easily reversed. Generally, they argued that it was not the responsibility of individual children and parents to act safely, but that of the communities they lived in to be safe for them. Criticism of the individualist approach to childcare was sharpened by opposition to Thatcherite ideologies, which were associated with social atomisation and marketisation. This critique is commonly remembered through the free school milk furore, but the 1980 Education Act liberalised school services more generally, as did the 1986 Social Security Act, 1988 Local Government Act, and 1988 Education Reform Act.77 Although most associated with Thatcher, New Labour governments also adopted the ‘choice agenda’, as evidenced by the creation of Academies under the Learning and Skills Act 2000. Tony Blair famously sent his own son to a school that had opted out of local government control under Thatcher’s 1988 act, demonstrating his administration’s endorsement of a marketised, individualised approach to childcare.78

    The argument that academic proponents of the freedom-and-agency school levelled against the prevailing neoliberal perspective was that the personal-choice-and-responsibility approach to childcare was ultimately limiting children’s choices about where they could be. As a first example, Mayer Hillman et al.’s 1990 One False Move took a road safety poster as exemplary of their issue with the government narrative:

    Figure III. A government road safety poster c.1980s/1990s.79

    Hillman et al. were concerned that young people’s freedom of mobility was being eroded, finding that whereas in the 1970s ‘nearly all’ British 9-year-olds were allowed to cross the street independently, by the 1990s only half were.80 They contested the DfT’s claim in Children and Roads, part of the 1990 Safety on the Move campaign, that ‘Over the last quarter of a century, Britain’s roads have become much safer’, and the words of the Association of Chief Police Officers, which stated that Britain was ‘the safest country… in Europe’ regarding its roads.81 Accidents and deaths may be down, Hillman et al. argued, but this did not mean the roads were safer; they contended the statistics indicated that the roads were considered more dangerous than ever, and thus avoided.82 Exceptionally for the time, One False Move attributed reduced rates of child road accidents to a loss of childhood freedoms, saying that ‘the accident statistics are reconciled by the loss of children’s freedom… it is the response to [the] danger, by both children and their parents, that has contained the road accident death rate’.83

    The founding of the Children’s Environments Quarterly journal in 1984 manifested the growth of academic interest in these issues. Based in the US, but with many British and international contributors, the journal was designed to be an interdisciplinary ‘low-cost, highly graphic alternative to more conventional journals, without the detached formality that many were finding troubling in “serious” academic publications’.84 Despite publishers telling its founders the idea was not economically viable, interest was strong enough to keep the journal running, and it served as a collecting point for academic work that challenged the urban design trends of the day.85 The transatlantic nature of the journal reflected a transatlantic interest in children’s environments, with American academics sharing many of the same concerns as British ones. Indeed, then as now, British academics in this field relied upon and were entangled with work coming out of the US. For example, the essay ‘Neighbourhoods and Human Needs’ in the opening volume of the journal came from the influential American anthropologist Margaret Mead, but was obviously also applicable to the British context:

    When children move into a newly built housing estate that is inadequately protected from automobiles, parents may be so frightened… that they give the children no freedom of movement at all.86

    Those writing for the Children’s Environments journal turned a critical lens on experts in positions of power, questioning their methods and philosophies. Colin Ward’s popular 1978 text on urbanism The Child in the City described the myriad ways ‘a significant proportion of the city’s children have come to be at war with their environment’, and found city planners to hold simplistic notions of children characterised by a concept of a ‘universal child’ which excluded lower-income, non-white, and female childhoods that typically had less access to the cars and technologies that facilitated their vision of late 20th century life.87 Ward’s text would go on to influence many others, including Claire Freeman’s 1995 Planning and Play, which examined British planning literature of the period and lamented the ‘lack of recognition given to children’s needs’ as ‘clearly evident in the almost total omission of any discussion of children in mainstream planning literature’.88 In a later study (2005), Freeman questioned urban planners on their methods and found that children were considered only in the planning of ‘recreation spaces’, and ignored in the planning of streets, houses, shops, leisure facilities, and infrastructure.89 This demonstrates that whilst many scholars had been denouncing planning methods for three decades, little had changed in response to their calls for action.

    Whilst the general argumentative thrust of the freedom-and-agency school of academic work did not much change across these decades, the methods of argument did. In the 1970s, academic literature of this type tended to focus specifically on the benefits the natural world had for children with psychological differences, rather than children more generally. Kaplan’s 1977 Patterns of Environmental Preference found that suburban-child participants with diagnosed Attention Deficit Disorder (ADD) reported beneficial outcomes for their mental health up to several years after being sent on an extended nature-camp expedition.90 Similarly, Behar and Stevens’ 1978 Wilderness Camping placed American city children with Attention Deficit Hyperactivity Disorder (ADHD) on a ‘residential treatment programme’ centred around outdoor activities, and found that the majority of their subjects demonstrated ‘improved interpersonal skills and school performance’ after the activity.91 During the 1980s and 1990s, this approach began to change, so that it became most common for studies to consider children in general as being under threat from reduced access to outdoor space, particularly natural outdoor space, as with Boyden and Holden’s 1991 Children of the Cities.92 Media in both Britain and the US picked up on this transatlantic concern during the 1980s and 1990s, and indeed were ahead of experts in expressing alarm about the role new technologies were playing in children’s lives – as will be explored in Chapter Two. A 1997 article in Time claimed that a chronic lack of play and physical touch during childhood, due to too much time spent indoors, could result in a brain ‘20 percent to 30 percent smaller than normal’, which, whilst wrong, demonstrates the acute fears of the period, and that academics were far from alone in their concerns.93

    Judy Wajcman’s 1991 Feminism Confronts Technology is exemplary of the parallel growing academic interest in past struggles over urban environments. Wajcman used an assessment of the ‘play streets’ movement of the 20th century as a lens through which to view contemporary debates over similar issues, the movement being an example of working-class people, predominantly women, creating an alternative vision of how childhood environments could be managed. Beginning in the 1930s, Play Streets sought to reduce the frequency with which middle-class drivers, or delivery drivers working for business-class bosses, ran over working-class children.94 Wajcman argued that increased traffic was a major factor in the decline of working-class street sociability because adults were no longer required to be out on the streets to watch over their own and others’ children.95 As formerly unassuming activities such as playing football or tag became acts of delinquency or hooliganism, working-class people – women and children in particular – were ‘literally left stranded in… cities designed around the motor car’.96 Katrina Navickas describes this as a lost form of ‘commoning’ (a process that generates relationships) in a forthcoming book.97 Wajcman framed play streets as a form of counter-cultural resistance, one that ‘started from the assumption that city children had the right to play in the streets where they lived, and that cars, not children, were the main problem’.98 Indeed, it was common for academics to invoke a form of ‘nostalgic progressivism’ that used memories of the past to argue for a radical shift in policy. Conversely, government experts offered a kind of ‘futurist conservatism’.

    Although practical experts and theoretical and historical experts had their differences over urban design, both tended to overlook dangers that children faced in environments other than the street. For example, the AA Motoring Trust’s 2003 report Accidents and Children found that the number of children dying as passengers in cars was considerable and overlooked; for young children in particular, the risk of being killed inside a car was greater than that of being hit by one.99 There was no wide debate about whether children should or should not be driven. The National Children’s Bureau also found that in the 2000s three times as many children were taken to hospital each year for falling out of bed as for falling out of trees.100 The National Trust’s 2012 Natural Childhood report found that during the 2000s one million children aged 14 or under attended A&E departments with home injuries: ‘30,000 with symptoms of poisoning, mostly from domestic cleaning products, and 50,000 with burns or scalds’.101 Additionally, it found that 500,000 infants and toddlers each year were injured in the home, 35,000 of them from falling down stairs, and that almost half of all fatal accidents to children were caused by house fires.102 In terms of sheer numbers, this meant that the home was by far the most dangerous place for a child to be. While cars had made the streets unsafe, the home was not the haven it was perceived to be, and the dangers it posed were, in general, far more serious than the injuries a child could sustain from outdoor play. On a much bigger scale, in relative terms the dangers of cars, strangers, and natural spaces were nowhere near as important in determining children’s lifespans as those of poverty and inequality between children.103 All this is to say that the danger of the outdoors was real, but it was also specifically focussed on as a danger to children in a way that the dangers of being in the home or being a car passenger were not.

    The debate between the health-and-safety approach of policymakers and planners and the freedom-and-agency approach of many academics defined much of how the physical environments that children inhabited came to change and be understood across these decades, especially in response to the rapid growth in car ownership and campaigns to take children off the streets. This debate was not an equitable one, however. The work of Chawla and others did not materialise in any extensive physical changes to the landscape; New Labour’s experimental and limited home zone scheme was the largest attempt to rebalance streetscapes.104 The process of further restrictions being placed on children’s mobility continued, with the burden falling especially on those who did not fit the mould of the universal child. Thus a health-and-safety approach brought reductions in the spaces available to children who could not easily access a park, garden, sports centre, National Trust property, or some other outdoor space.

    1.3 Safeguarding, Crime, and Children: Challenging Traditional Expertise

    Girls in particular were said to be threatened by one of the most enduring dangers of late 20th- and early 21st-century Britain; not the motor car but ‘The Stranger’, an idea that captured the public imagination. Promoted by parents, newspapers, charities, and indeed experts in government, a national ‘stranger-danger’ discourse arose which asked: ‘what can be done to protect our children?’. This section will explore the impact of this popular and media discourse on experts, and how – once again – policymakers and academics differed in their response to the rise of this new threat, whilst also grappling with a movement that challenged their traditional authority. I will also explain how expert discourse that conceived of child safeguarding as a societal issue rather than an individual one both clashed with the dominant individualist culture of the period and perpetuated solutions based on the idea of a universal child. Safeguarding solutions, whilst partially valid, inevitably led to the undervaluing of the issues facing children who did not fit the normative model.

    The stranger provoked a response in the public that the motor car did not, even though the latter was evidently more deadly. Why? First, strangers posed an intentional threat rather than an accidental one, making the danger more malicious. Second, the inherent humanness of stranger-danger made it feel more personal and immediately understandable as compared to the more complex system of factors that constituted car danger. Finally, the unknowability of the stranger gave the idea power. Its theoretical, semi-fictional quality gave it an air of mystery so often used in fiction to create atmospheres of anxiety, suspense, or horror – feeding people’s fears of a threat that they knew was out there but could not see. Indeed, whilst abductions, abuses, and murders by strangers did pose a threat to children, the specific idea of the stranger that emerged during this period arose largely on the back of what Jennifer Crane calls a ‘sensationalist’ media narrative that began with the Moors murders in 1966 and entered its heyday in the 1980s and 1990s.105 The construction of the idea of strangers during this period – which contributed to parents being more restrictive over their children’s mobility – overstated the dangers and underplayed the benefits of allowing children independent outdoor play.106 Additionally, particularly following James Bulger’s murder in 1993, a second construction arose in public, media, and expert discourse which argued that children themselves (teenagers especially) were something to be fearful of. Like the idea of ‘The Stranger’, the idea of ‘The Youth’ was representative of a broad fear: that the younger generation had lost discipline, leading them to become antisocial and dangerous menaces to society. As with cars, experts were divided into two camps on the issue. In general, government experts of the health-and-safety school endorsed stricter control of children’s independent mobility, to protect them from strangers and to protect strangers from them.
Meanwhile, academics of the freedom-and-agency school supported less control over children’s mobility. For example, in 1992 the historian Philip Jenkins called the recent focus on stranger-danger a baseless effort by politicians and the press to ‘induce fear and moral panic’.107

    The stranger narrative of the 1980s had its roots in the high-profile reporting of child abuse (also called child maltreatment) cases from the 1960s onwards, most notably the 1966 Moors murders of five children. The fact that one of those convicted in that case, Myra Hindley, was a woman gave it particular traction in the press, with Hindley earning the tagline ‘the most evil woman in Britain’.108 Later investigations into possible other victims, and Hindley’s repeated appeals for release from prison in subsequent years, kept the case alive in public consciousness. The media coverage of the Moors murders and subsequent similar cases was twofold in nature. Its first aim was to convey the horror of the story to the public and condemn the criminals, but simultaneously it critiqued the organisations which had failed to prevent the crimes occurring. Indeed, traditional experts such as social workers, police, doctors, psychiatrists, government officials, and teachers were often heavily criticised in the press for their failures in cases of child abuse and murder, particularly if the perpetrator was related to the child.109

    In the infamous 1973 case of Maria Colwell, in which the seven-year-old was abused and murdered by her stepfather, much of the reporting, and the ‘primary focus’ of the subsequent expert-led public inquiry, was on the failings, in the lead-up to Maria’s death, of institutions such as social services, the NSPCC, and the health service.110 Criticisms centred on poor communication between services, a general lack of competence, and institutional intransigence. Maria Colwell was by no means a one-off; the media responded analogously throughout the 1970s, 1980s, and 1990s to similar cases, including a spate of three in 1984: the murders of Jasmine Beckford, Tyra Henry, and Heidi Koseda, all young girls killed by their father or stepfather. Black children and girls especially were at disproportionate risk of such a danger, including Jasmine, Tyra, and, 16 years later, Victoria Climbié, whose death kickstarted several pieces of child-protection legislation under Tony Blair. In the report into the case of Tyra Henry, white social workers were found to have failed to intervene, despite being aware of domestic abuse, because they ‘lacked the confidence to challenge the family because they were black’.111 Once again, the experts had failed, and media coverage encouraged the public to take notice.112 Stephen Bubb, the leader of Lambeth Council where Tyra had lived, called for an end to a ‘trial in the press of the social workers’.113 Margaret Thatcher’s government responded to the 1984 cases with new guidelines for social workers on how to handle child abuse cases and by passing the 1984 Child Abduction Act, which recognised the rights of the child more explicitly than the old 1861 Offences Against the Person Act and created separate categories of crime for ‘Abduction by a Parent’ and ‘Abduction by Other Persons’.114

    In the North East, the Cleveland scandal of 1987/1988, which a local MP called ‘the greatest child abuse crisis that Britain has ever faced’, also spawned a national media storm surrounding social workers.115 In that scandal, 121 children were taken away from their parents in the borough under accusations of sexual abuse, only for 94 to be returned after being determined to have been ‘incorrectly diagnosed’.116 Some scholars have since argued that many of the original diagnoses were in fact correct, and that government officials suppressed evidence supporting the diagnoses because acknowledging the scale of abuse would have required significant new resources.117 Either way, at the time and in popular memory the Cleveland case became a totemic example of expert overreach, and is popularly credited with part-inspiring the 1989 Children Act, which shifted the focus of responsibility for children away from the state and towards individual families.118

    By the mid-to-late 1980s, following two decades of increasingly publicised cases of child abuse, the issue of sexual abuse in particular entered expert discourse as a significant political talking point and agenda item. Moreover, the growing recognition of the idea of ‘Battered Child Syndrome’ amongst medical professionals after 1962, following an article of the same name in the Journal of the American Medical Association, gave legitimacy among experts to the problems of child protection and abuse.119 The overall conception of abuse was moving away from being understood as a medico-social problem of mental or physical health and toward a socio-legal problem with multitudinous societal influences and effects. This, together with the increased interest from the public and media in cases of child abuse, represented an experiential and emotional turn in the study of child abuse; that is, a greater value placed on ‘normal’ people’s views over the views of experts. The Childline charity, established in 1986, was founded on this principle. Esther Rantzen, Childline’s founder, described the organisation’s concept of child abuse as ‘incorporating sexual abuse, but moving beyond it to encompass physical and emotional abuse, and neglect’, something she criticised experts of the period for ignoring.120 Thus, experts found themselves asking: how are we to address this ‘new’ Battered Child condition, for which our traditional methods are inadequate?

    The direction of travel of expert analysis on this issue from the 1970s through into the 1990s involved turning away from thinking about child protection in the (now old-fashioned) paternalistic sense, wherein those in authority assumed they knew what was best for children. Instead, Tisdall explains, it became accepted amongst theorists that parents or even children themselves could be treated as sources of expertise on such issues, and this widened the discursive space surrounding what forces in society contributed to environments in which cases of abuse occurred, and who was best placed to understand those forces.121 When child abuse had been seen as primarily a medical issue in the first half of the 20th century, assessment of it had been made at the level of the individual, asking ‘what is wrong with this person?’ of either the abused or the abusers. Now that child abuse was understood as a social issue with medical consequences, the question had become ‘what is wrong with society?’. The older individualistic medical approach was deeply flawed in its inability to address patterns and trends of abuse, but the social approach also had its problems. As Crane argues, the impact of this new expert discourse was to bring about the emergence of ‘a universalist model of childhood vulnerability, characterised around an ageless, classless, genderless “child”’.122

    This was connected to the ‘child-centred’ pedagogy of the era which, as discussed in the Urban Landscapes section, was an approach to teaching and parenting that ostensibly put children at the centre of its philosophy, but was mostly interested in fitting ‘incomplete and incapable’ young people into a particular universal societal mould.123 Child-centredness influenced many aspects of children’s lives, from the way school buildings were designed to the way social services operated, because, as Roy Kozlovsky explains, the idea of catering to ‘child’ as opposed to ‘children’ led to certain groups being excluded from supposedly inclusive environments.124 For children with disabilities, for example, the poor accessibility of school buildings built during the 1970s and 1980s mirrored the ill-provision of their education, which by treating all children as one actually furthered certain inequalities between them.125 Furthermore, because the child-centred approach to child protection was far more attentive to identifying and addressing threats to children from outside the universal model than from inside it, it disproportionately focussed on the dangers of strangers. People known to children – parents, teachers, and peers – were inside the model, and so even though the majority of child maltreatment cases (sexual abuse cases especially) were and are perpetrated by people already known to the child, considerably less emphasis was placed on the dangers those ‘internal’ threats posed.126 Waters highlights how this institutional suspicion of the unfamiliar also fed into other societal prejudices, notably that of race.127

    Policymakers under the Thatcher governments endorsed this universalised ideal of childhood, as evidenced in the acts they introduced to centralise control over children’s lives at home and school. There were obvious efforts, such as the ‘prohibition on promoting homosexuality’ imposed on schools and local authorities under the Local Government Act 1988, but there were also a number of acts that enforced family uniformity by more indirect means. To the Thatcher administrations of the 1980s, home and family meant – even symbolised – safety and normality, and the way they approached the legislation of childhood reflected that. The 1980 Child Care Act, for example, focussed on keeping families together by encouraging councils to work with private organisations to ‘diminish the need to receive children into or keep them in care… or to bring children before a juvenile court’.128 John Major’s government continued this approach with the 1991 Child Support Act and the consequent establishment of the Child Support Agency in 1993, which required the tracking down of ‘absent’ parents (primarily fathers) to get them to pay child support instead of the government; the intention thereby being to make sure parents met their legal obligations and to discourage family breakups.129

    The 1989 Children Act is the crucial piece of legislation to examine on this issue, as it introduced the most significant changes to encourage the Conservative model of family life. Following, to an extent, the ideas of the new ‘child-centred’ approach to childcare, the Act specified that local authorities should give ‘due consideration’ to children’s wishes about where they wanted to live, but that ultimately parents had total authority on the matter:

    ‘Any person who has parental responsibility for a child may at any time remove the child from accommodation provided by or on behalf of the local authority’

    – Children Act 1989, Part 3, Section 20 (8).130

    The law stipulated that only unmarried fathers could lose parental responsibility (PR); mothers and married fathers could only ever have PR restricted in rare and severe cases.131 This, along with the rule that unmarried fathers did not automatically have PR, meant the act tacitly endorsed a ‘traditional’ nuclear-family structure.132 Even in cases where it was deemed that a child should be taken away from their parents, the act still required that they be housed as close to their parents’ home as possible, and that they keep the family name.133 Furthermore, in Part 5 (‘The Protection of Children’), the act firmly established child abuse as a legal issue first and a social/medical issue second, which – after implementation – led to a greater reliance on hard-to-gather forensic evidence to convict in such cases, leaving children stuck in ‘forensic limbo’ as cases dragged on for longer, and fewer were processed overall.134

    Figure IV. ‘Estimate average duration of care proceedings across all courts’.135

    The focus of the Children Act was thus on addressing exceptionally horrific, newsworthy individual cases of child abuse as opposed to the broader, pervasive issue of child abuse, an approach which was supported by the press’ own fascination with such cases. This type of legislation, which assumed a singular preferred family model, lost sight of the specificity by which any one child’s life differs from another’s under the same societal forces, altered by crucial variables including race, class, gender, and environment. The Children Act’s insistence on the home and family being the preferred safe space for children, for example, and that authorities should only intervene if a child is ‘beyond parental control’, failed to consider the increased risk of sexual abuse that girls faced in the home, particularly from father-figures, and the protection that being outdoors with other children could offer from such threats.136

    To say that the child-protection agenda of experts in government and social services during the 1980s and 1990s was based entirely around efforts to encourage ‘family values’, however, would be untrue. As part of the experiential and emotional turn of the era, and of the Thatcherite distrust of the civil service, policymakers were also keen to consult non-traditional sources – feminist critics, charities, and public campaign groups who ‘spoke for children’ – about new approaches to the management of the systems of child protection.137 This led to a considerable degree of independence being given to small-scale voluntary-sector groups – at the expense of government and social services – to run often quite radical programmes of education and activity.138 For example, in 1986 the Central Office of Information (COI) hired one such small charity, Kidscape, to create official child-safety public information films (PIFs) on its behalf, because the COI had assessed its own Charley PIFs to have been ineffective.139 More broadly, this approach devolved the responsibility for child-support programmes to local authorities and charities, which allowed certain groups to pursue new approaches to child protection, but it also meant less regulation and uniformity in the support available to children. The largest of these groups founded in the 1980s and 1990s, such as Kidscape, Kids Company, Childline, Children in Need, KidsOut, and the WAVE Trust, were based in London, and as such their services were harder to learn about and access for children in the North East.140 Groups in the North East, like the Gateshead Young Women’s Outreach Project (GYWOP), which drew on the experiential and emotional expertise of the children they worked with to discuss issues of contraception and sexual abuse, were smaller and had less influence over experts in government.141

    Figure V. Scene from stranger-danger film ‘Adult & Child’ (1994).142

    The growing significance of charities marked an important shift in the government’s conception of who was considered an authority on issues of child protection, as now parents and even children themselves were being consulted as experts – not directly by policymakers, but by the private organisations they worked with. This approach complemented Thatcher’s distrust of the public sector and her drive to focus on new, alternative sources of expertise such as those in private charities.143 Childline is one prominent example: the charity came out of the success of the BBC show Childwatch at the insistence of its producer, Esther Rantzen.144 Many traditional experts, such as those at the NSPCC and National Children’s Home (NCH), whilst supportive of the effort, expressed doubts as to its longevity because it was run by journalists and inexperienced volunteers, but it proved extremely popular.145 In a retrospective seminar in 2016, the MP Shaun Woodward said of Childline:

    Thirty years ago we didn’t talk about child abuse. Child abuse was something that most people thought happened in extreme cases in places that had nothing to do with them… What Esther brought to it was her journalism and what she found was that there were these kids who for whatever reason weren’t being picked up by the NSPCC, weren’t being picked up by the statutory services.146

    Interestingly, though, whilst policymakers in the 1970s and 1980s were often keen to consider popular sentiment and consult with non-traditional experts, academics were not so quick.147 The trust in experiential and emotional expertise that organisations like Childline represented in some respects undermined the value of traditional experts, challenging their authority. Until the 1990s the academic response to the topic of stranger-danger and child protection was notably muted, especially when compared to the literature about the threats that urbanism posed to childhood, for example. Why was this? Experts often failed to grapple effectively with these emotionally charged public debates because they were unfamiliar with them. Debating the dangers of cars and urbanism was known territory for many, as this involved relatively formalised and detached discussions within expert circles. In the realm of strangers, however, the prevalent discourse was non-traditional: it was passionate, experiential, and led by journalists and public campaign groups who were often distrustful of established sources of authority. For example, Colin Ward’s The Child in the City (1978), Robin Moore’s Childhood’s Domain (1986), and Neil Postman’s The Disappearance of Childhood (1992) were three influential texts from across the era which lamented the loss of childhood freedoms – and have subsequently been frequently referenced in academic work – but did not address the topic of strangers or child abuse.148

    The arguments these academics made about ‘lost childhoods’ may be seen as largely legitimate and, indeed, the work of this thesis supports many of their ideas, but because they were written in response to what they saw as an emerging threat to childhood liberties, they did not engage with the stranger-danger narrative as they did the urbanism narrative. A scepticism about the protection debate led those scholars who did write about child murder cases to talk less about issues of child protection themselves, and more about policy responses to them. For example, Nigel Parton’s 1986 analysis of the official report into Jasmine Beckford’s murder found its conclusions to be ‘very much open to doubt’ and ‘misdirecting our attentions from the major issues’ because of its tight focus only on issues within the Beckford family, classifying them as the problem, and not thinking more broadly about societal forces acting upon the family.149 The report’s focus on ‘high risk’ cases does distract it from addressing how child abuse could be better prevented generally, but Parton’s critique can likewise be judged as paying too little attention to the importance of these high-profile cases. Much academic discourse in the field was preoccupied with structural analyses, and critiqued the ‘experientialist’ approaches taken by policymakers in conjunction with the media as being too often anecdotal, sensationalist, and lacking a serious methodology. However, the value in ‘experientialism’ came to be more recognised in the 2000s.

    An emphasis on experience and emotion initially acted to exclude traditional experts, but by the mid-1990s more and more academic work started to address the stranger issue and, indeed, to engage with experiential and emotional sources of expertise.150 In an article for Children’s Environments in 1994, one of the journal’s first to assess the topic of strangers, the researchers interviewed parents about their fears for their children and found that ‘for most parents the fear of random physical assault by a stranger superseded all other fears of violation or harm’.151 The researchers also concluded that ‘Parents commonly fail to recognize that children’s safety is an illusion’ – meaning that danger was an inherent and in some ways essential part of childhood – and this quote is exemplary of the broader academic approach to the stranger.152 Whilst not dismissive of public and media concerns about strangers, many academic contributors argued that the issue had been overblown by newspapers and that the measures parents and policymakers were taking to combat stranger-danger were disproportionately restrictive. Pain’s ‘Paranoid Parenting?’ (2006) described ‘risk-averse’ parents as ‘cosseting their children indoors’.153 Katz in Power, Space and Terror (2006) made the point that street crime had been falling since the 1970s and 1980s, meaning that – in terms of crime – parents had generally played in more dangerous streets than those they denied their children.154 Handy et al.’s 2008 ‘Neighbourhood Design and Children’s Outdoor Play’ similarly emphasised that it was the ‘parental perception of neighbourhood safety’, rather than actual safety, that was the significant restrictor of child mobility.155

    This tendency toward disagreement between academics and policymakers in expert discourse over child protection was accelerated by the rise of a new fear following the murder of James Bulger in 1993. James Bulger’s case was as widely publicised as those of Maria Colwell or Jasmine Beckford, but what made it particularly notable was that the killers were children themselves: two ten-year-old boys who led James away from his mother in a busy shopping centre and were caught on CCTV doing so. The evocative image of the toddler being led away spread widely, and the event sparked much discourse surrounding ‘the state of the youth’ in modern Britain.156 In the North East, response to the Bulger case was shaped by the memory of 11-year-old Mary Bell, who in 1968 had murdered two young boys by strangulation on Tyneside and who loomed large in popular memory. Indeed, soon after the story of James Bulger broke, reporters tracked down the now 41-year-old Bell, who had assumed a new identity and was consequently forced to do so again after members of the public discovered her address and threatened violence.157

    The media and public concern that arose following the Bulger case led to a notable change in policy approach from the government, with both John Major and Tony Blair promising to ‘crack down’ on child-crime. In the 1980s the Thatcher governments had been comparatively lenient towards youth crime, with the 1982 Criminal Justice Act significantly reducing the imprisonment of under-21s and limiting the use and length of custody in young offender institutions, subsequently leading to reduced crime rates and prison populations for young people.158 In the 1990s, however, Major and Blair’s governments took a much harder – and electorally popular – stance toward youth crime. Major’s notorious 1993 ‘Back to Basics’ speech summed up the approach succinctly in the insistence that society should ‘condemn a little more and understand a little less’.159 One month later the Home Secretary, Michael Howard, increased the maximum sentence of detention for 15-17-year-olds, and Secure Training Centres (STCs) – privately-run facilities in the style of US ‘boot camps’ – were introduced.160 Despite Blair calling the proposal a ‘sham’ in 1994, Labour would go on to further invest in the programme once in power.161 On the announcement, Howard stated that child offenders ‘are adult in everything except years’, and between 1993 and 1998 the number of imprisoned teenagers doubled.162 Similarly, the 1994 Criminal Justice and Public Order Act and the 1997 Confiscation of Alcohol Act gave the police greater powers to break up and move along groups of young people on the street.163

    Under Tony Blair’s premiership this basic approach changed little. His government’s 1998 Crime and Disorder Act abolished the presumption that children aged 10 to 13 were ‘doli incapax’ (incapable of committing a crime) and created a system of Anti-Social Behaviour Orders (ASBOs), which could be used in any event where a child behaved ‘in a manner that caused or was likely to cause harassment, alarm or distress to one or more persons’.164 Under the same legislation ‘parenting orders’ were introduced, which legally required parents of children with ASBOs to impose curfews and attend parenting classes.165 The 2003 Anti-Social Behaviour Act strengthened the ASBO system, giving police the power to disperse groups of two or more children in any public place if their presence ‘has resulted, or is likely to result, in any members of the public being intimidated, harassed, alarmed or distressed’.166 This was strong policy that matched Blair’s ‘tough on crime, tough on the causes of crime’ slogan, paired with an emphasis on personal responsibility.

    This philosophy enabled an approach towards crime that not only focussed more on the punishment of individuals but, particularly in the case of children, distanced government from responsibility towards them. Major and Blair’s more punitive policy platforms increased the level of state intervention but decreased the level of care the state was expected to provide or take responsibility for. For example, the 2007 paper The Children’s Plan: Building Brighter Futures explicitly placed the upbringing of children outside of the government’s remit. In language reminiscent of the Thatcher administrations, the stated first principle of the paper was that ‘Government does not bring up children – parents do’.167 What differentiated Blairite youth justice from Thatcherite youth justice, however, and what meant Blair brought far more children into contact with the youth justice system, was the belief that young people had become dangers to society, not only that society was a danger to them. This can be seen in how the term ‘child’ was often withheld from child-criminals, who were instead referred to as youths, yobs, teens, young offenders, delinquents, or any number of other terms. Emblematic of this were the comments of the Secretary of State for Justice, Jack Straw, in 2008, when questioned on Britain having ‘more young people in custody than any other comparable country in Europe’, that:

    Most young people who are put into custody are aged 16 and 17 – they are not children; they are often large, unpleasant thugs, and they are frightening to the public.168

    This categorisation of young offenders as ‘not children’ is indicative of the perception of them held in policymaking circles during the 1990s and 2000s, and of the STC and ASBO systems set up to deal with them. In some respects, this child-danger discourse was a response to the 1980s stranger-danger narrative’s construction of the child as pure and powerless.169 Children were not so innocent, the argument went; they were not always the ones to be fearful for, but sometimes the ones to be fearful of. The Bulger case was only an extreme symptom of a wider problem. This was not only the view in expert circles: the strong public and media response to the Bulger case galvanised government action (this will be explored further in Chapter Two). To give an understanding of the nature of the response, the Daily Star’s headline after Bulger’s killers were convicted read: ‘How do you feel now, you little bastards?’.170

    1.4 Technology: Exploring an Unknown Environment

    The role that technologies played in the trend toward indoor childhoods from the 1980s onwards was different to that of urban landscapes or strangers. Cars and ‘creeps’ kept children inside through fear, creating an atmosphere in which many parents decided it was too dangerous to let their children out of sight. Technology, on the other hand, was something about the indoors that was attractive to children: something to make them choose it over outdoor play, or simply something to do when they were not allowed out.

    By 1980, 97% of UK households had a TV, and after the first home computers launched in 1977 they too became commonplace, first in schools during the 1980s and then in homes during the 1990s and 2000s.171 The children of this generation were thus the first to grow up with these technologies, meaning that, as Peter Buchner observed in 1995, many parents’ frames of reference for childhood had become ‘invalidated’.172 The World Wide Web launched in Britain in 1991, and as household adoption grew (from 9% in 1998 to 73% in 2010) the ‘digital world’ emerged as an entirely new environment of childhood, one that young people often understood better than adults.173 This was not the case for all children, however. Just as the dangers of cars and strangers disproportionately affected BAME children, working-class children, and girls, access to many of the benefits of technology was more difficult for them.

    As Helsper and Livingstone contend, whilst most children had access to new technology, the disparity lay in the fact that white children, middle-class children, and boys were given a better-quality technical education, granting them the skills to make the most of technological opportunity.174 The GCSE in Information Technology (introduced in 1986) is a good example of this, as it had consistently low take-up rates among female, black, and Free School Meal students throughout the 1990s and 2000s.175 This meant, I argue, that the pervasive concept of the universal child that had for so long gone unexamined in expert circles started to adopt a new characteristic: the universal child was technologically literate. Digital devices, conceived of as universal tools, fell prey to the same notions of the universal child, which meant little provision was made for accommodating differences.

    Technology was not seen only as a source of opportunity, however: from the 1980s onwards media and public fears grew significantly over the negative impacts that TVs, games consoles, and computers might have on childhood. Technology’s effects on addiction, obesity, anxiety, bullying, social exclusion, and antisocial attitudes were all talking points. Perhaps the ‘white heat of technology’ was too hot for children to handle? Unlike the fears over cars and strangers, though, this fear emphasised the dangers of the indoors over the outdoors. The theme of the BBC show Why Don’t You? (which ran from 1973 to 1995) asked why children did not ‘just switch off your television set and go do something less boring instead’.176 The irony that this was a TV programme is self-evident, and was not lost on children at the time,177 but the greater irony is that after 1988 the show largely dropped its central message and morphed into a more standard drama, resulting in an increase in viewing figures from 0.9 to 3 million per series.178 Public and media concern (which will be examined in greater detail in Chapter Two) over the issue of ‘square-eyed’ children was common, but these concerns ultimately had little impact on the technologisation of youth, and the fact that parents were choosing to keep their children at home more than ever in the face of cars and strangers only accelerated the process.

    Expert discourse, both in academia and government, took a different path to the popular. As part of efforts to expedite Britain’s continuing post-war transition from a manufacturing to a services economy, policymakers under Thatcher, Major, and Blair encouraged the development of, adoption of, and education in new technologies.179 Indeed, the computer was at the heart of this effort, and the perception that children needed education in digital literacy was strong, with Thatcher’s government promising to ‘put a microcomputer into every secondary school in the country by the end of 1982’ because, in her own words:

    We must remember that today’s school children will still be working in the year 2030… My generation has perhaps been too cautious about accepting new technology in micros. Younger people are quick to use new things and have an aptitude for them… familiarity with keyboards and TV screens will help them to take in their stride the new technologies on the shop floor, in the office and in the home.180

    Similarly, Blair’s commitment to ‘education, education, education’ was in large part underpinned by an understanding that ‘the age of achievement will be built on new technology’, and a promise to connect all schools, colleges, and universities to the ‘information superhighway’.181 In regard to TV, Thatcher described it as ‘one of the great growth industries, creating jobs, entertainment, inspiration and interests’.182 It is not surprising, then, that whilst in government neither took steps to significantly restrict or regulate children’s TV or other media. The closest thing to this was the founding of Ofcom in 2003 under the Communications Act, which formalised the requirement to consider ‘the vulnerability of children’ when deciding what media they could be shown.183 Ofcom’s establishment did not, however, respond to contemporary fears over the dominance of TV in children’s lives; its remit was strictly to concern itself with the content of children’s media, as it was founded off the back of public fears over children’s exposure to violence and pornography.184 In other words, regulators were not concerned about how much TV children were watching, only that they were not watching ‘the wrong things’.

    During the 1990s and 2000s, in response to reductions in unstructured outdoor play time, many more families started to send their children to pre-booked sessions for sports, hobbies, and lessons, which often came with a price tag.185 This commercialisation of play meant that opportunities for free play, in both senses of the word, were further reduced and considered lower status than those with an associated cost, as the price connoted quality.186 This disadvantaged families without the money or time to take their children to such sessions, pushing them by necessity towards indoor play and technologies like the TV. A 2001 study for British Telecom (BT), which found that working-class children were significantly more likely than middle-class children to have a TV, games console, or video recorder in their room, was indicative of this fact and showed how the use of technology was linked to a lack of access to outdoor environments.187 It was also the case that middle-class parents were generally more receptive than both working- and upper-class parents to arguments about the dangers of the TV, and therefore stricter over their children’s access to it.188

    Concerns over childhood obesity linked to technology were also rising, but policymakers generally framed this as a parenting problem rather than a government one. On the release of the 2004 health white paper, the Department of Health stated that ‘parents know that their children’s health is primarily their responsibility’; similarly, the Children’s Commissioner for England argued in 2006 that ‘[parental] education should start in ante-natal classes’ on how to manage children’s relationship with technology.189 This expanded the remit of the universal child: with regard to technology, children were assumed to be healthy and to have a healthy relationship with it, but this was the preserve of those children whose families could facilitate outdoor play and exercise as alternatives. Once again, this meant middle-class white boys benefitted most.

    The overarching expert discourse in government over children and technology, then, was one that concentrated on positives at a policy level and delegated management of negatives to the family level. Attempts to restrict children’s access to technology would have been seen as draconian and as limiting their future prospects; furthermore, dealing with the issues that had driven most children indoors in the first place (cars and strangers) would have been both an enormous undertaking and one that experts in government would not have endorsed, as demonstrated earlier in this chapter. Public and media discourse stood in stark contrast to this approach, often viewing technology with far greater suspicion. The health consequences for children of increasingly indoor lifestyles were obvious to all parties, but the parties did not agree on the cause or the solution. The policymakers’ view was that education for parents and children was the answer, whereas the popular view was that the technology itself was the problem. Both views denied any agency to children themselves, assuming them too naïve to do what was best for themselves, when in reality most children were not given much of a choice.

    Figure VI. A satirical cartoon illustrating parental fears (2007).190

    The relationship between technology and children gained attention in academic research in the 1990s. In contrast to academic work on cars and strangers, which tended to reach similar conclusions, academic opinion on technology covered a broad spectrum. Much work was concerned with understanding the impacts of an indoor lifestyle and of spending long periods in front of a screen, and the benefits of outdoor activity, but much also looked at the benefits technology could have for young people.191 The conversation was new and the consequences for childhood were far-reaching, complicated, and at this stage mostly speculative. As David Buckingham observes, the discussion was ‘marked by a kind of schizophrenia that often accompanies the advent of new cultural forms. If we look back to the early days of the cinema, or indeed to the invention of the printing press, it is possible to identify a similar mixture of hopes and fears’.192

    Early academic interest in technology often had a parallel interest in late-20th-century neoliberalism, closely relating the two. For example, Ulrich Beck’s Risk Society (1992) drew a connection between the increasingly diverse and individualised ways people were able to consume media and a broader ‘western trend towards individualisation’.193 Whether technology was accelerating this trend or was a symptom of it is beside the point, but it certainly was the case that the majority of the modern technology sector in both the UK and US was born into a market seeing widespread deregulation of financial services and an emphasis on private over public ownership.194 Many scholars during this period cast children as innocent participants in a process of their own decline, pushed towards embracing a technological ‘media environment’ that was damaging to their healthy development. Sandra Calvert’s Children’s Journeys through the Information Age (1999) and Kirsten Drotner’s Dangerous Media? are two examples that described mass media as a ‘moral threat’ to young people.195 Ray Lorenzo’s Too Little Time and Space for Childhood (1992) and Neil Postman’s The Disappearance of Childhood (1994) both identified the TV in particular as part of a wider problem of ‘lost childhoods’, Postman writing that ‘children today are captive in their homes… They are institutionalized, over programmed, information stuffed, TV dependent, “zoned in” and age segregated’.196

    For other academics, though, technological promise resurrected some of the inter-war visions of a connected, intelligent, globalised world. Richard Lanham’s The Electronic Word (1993) argued that digital technologies, with the particular aid of the internet, would enable a mass form of democratic literacy that would allow countries to ‘enfranchise the public imagination in genuinely new ways’, as did John Tomlinson’s Globalisation and Culture (1999).197 Likewise, Jon Katz in Post Politics in the Digital Nation (1997) saw the digital as a means of children’s liberation from the increasingly restrictive adult physical world: computer games and TV technology offered children an escape from restrictions, a place where they could engage in unstructured play when their outdoor activities were increasingly timetabled.198 Much of this work was based on the potential of technology, however, and such assessments diminished as academic work became increasingly sceptical during the 2000s. This duality in academic thought between dystopian and utopian visions of technology was reflected in Todd Oppenheimer’s The Flickering Mind (2003), where he argued that children were on the verge of either being able to harness technology to help them become ‘creative problem solvers’ or falling victim to ‘computerisation and commercialization careening out of control’.199

    This duality was also seen in assessments of the rise of mobile phone use among children. Sonia Livingstone’s Young People and New Media (2002) found that the mobile phone was allowing children to cross ‘hitherto distinct social boundaries’ by enabling them to talk and arrange meetups with children from different neighbourhoods.200 Marilyn Campbell’s 2005 study demonstrated the negotiating power that a mobile phone could grant children when discussing curfews and boundaries for roaming with their parents, allowing more freedom.201 Simultaneously, though, Williams and Williams’ 2003 study suggested that the expectation that parents should be able to communicate with their children at all times created an environment where children felt they had no private space, and their mental health suffered for it.202 A 2002 study pointed out how phones could exacerbate existing inequalities, as those without a mobile (most likely working-class children) were excluded from the friendships and communities built around them.203 Additionally, as Dominique Pasquier highlighted in Children and Their Changing Media Environment (2001), the problem was not always access to devices like mobile phones, but the skills necessary to operate them; Pasquier found that both girls and working-class households demonstrated a ‘problematic skills gap’ in the use of digital devices.204

    Livingstone also argued that because technology like the mobile phone had invalidated many parents’ frames of reference for childhood, they had been forced to become ‘involved in a process of negotiation with their children over mutual identities, rights and responsibilities’.205 This contributed, she argued, to the fall of the nuclear family in favour of the ‘democratic family’, wherein the traditional parental and child roles of authority and subordinate were replaced by a mutual expectation of love, respect, and intimacy.206 However, as Joe Frost reasoned in A History of Children’s Play, changing family formations also meant that people were increasingly moving away from their hometowns on reaching adulthood, isolating their own children from traditional familial networks such as cousins and grandparents.207 Then again, Frost admitted, technology like the phone or internet could facilitate reconnection.208 Tonya Rooney’s Trusting Children (2010) also noted the multiplicity of technology’s impact on childhood freedoms. On the one hand, the monitoring and safekeeping of children in a ‘just in case’ model meant parents gave children more freedom.209 On the other, she warned: ‘Rather than simply “playing it safe”, parents and carers may be depriving children of the opportunity to be trusted and to learn about trusting others, and the opportunity for growing competence and capacity that can result from this’.210

    In 1999, James Wandersee and Elisabeth Schussler coined the term ‘Plant Blindness’ to describe the idea that children were becoming disconnected from the natural world, and therefore unable to recognise or name common species.211 During the 2000s, arguments of this nature became popular in academic work, crystallising around the term ‘Nature Deficit Disorder’ (NDD), coined by Richard Louv in Last Child in the Woods (2005).212 In this work the focus was not explicitly on technology but rather on a lack of time being spent outdoors ‘close to nature’; technology, however, was always very closely associated with this discussion.213 The association was unavoidable, as it was clear that time which children of previous generations had spent playing outdoors was being spent by children of current generations watching TV or playing computer games.

    However, this association was somewhat misleading, as it appeared to lead logically to the conclusion that technology was the cause of the problem. This was not really the case. Technology played a role, but parental restrictions based on fears of cars and strangers had already been moving children indoors for decades. The 2003 study Cyberkids found that ‘children overwhelmingly preferred to be outside if the weather and light allowed’ and that the time they spent on indoor activities like watching TV was a replacement for ‘doing nothing time’, when they or their friends were not allowed outside.214 These results were echoed in a 2014 study for the National Children’s Museum in which ‘81% of children said they preferred outdoor play’ to watching TV.215 The same study also found that 59% of children were not allowed to leave their house unsupervised, and half were not allowed to play in a garden unsupervised.216 A 2020 study for the Biomedical Public Health journal found that ‘screen media activity does not displace other recreational activities’ amongst children, instead concluding that the amount of time a child spent playing outdoors was much more dependent on their socio-economic background.217

    Whilst academic work may not have explicitly cited technology as the cause of the growing concern around decreased outdoor play, this was a conclusion that could easily be inferred (and often was by the press) from work that compared, for example, the healthiness of children based on how many hours they spent watching TV.218 The concept of NDD in particular, though not recognised as an official psychological ‘disorder’, gained traction in the 2000s with a wide variety of UK groups such as The National Trust, The Council for Learning Outside the Classroom, and The Children and Nature Network. Louv did not say in Last Child in the Woods that technology caused NDD, but he did use language which implied as much, such as: ‘as electronic technology surrounds us, we long for nature’ and ‘television remains the most effective thief of time’.219 The clear purpose of academic interventions like Louv’s was to argue for a material change in modern childhoods, attempting to draw the attention of experts in government to an issue which they thought not properly recognised. This work was influential in the media and on the work of charities and organisations like the National Trust, but in the 2000s policymakers did not much address it. In 2010 Nicholas Carr’s The Shallows took an even graver view than Louv, arguing that children’s ‘malleable minds’ were being degraded by exposure to the digital world.220 Carr wrote that: ‘What we’re experiencing is, in a metaphorical sense, a reversal of the early trajectory of civilization… we seem fated to sacrifice much of what makes our minds so interesting’.221

    Work such as Louv’s and Carr’s attracted media interest because the issues they raised unsurprisingly played into common parental fears over safety, health, and freedom. Indeed, many authors did not obscure the fact that they regarded resolution of the issue as not only a practical but a moral imperative; the solution of course being, as Louv defined it, for children to be ‘reunited with the rest of nature’.222 In the translation from academic to public discourse, however, an acknowledgement of the multiplicity of childhoods was lost, particularly from a historical perspective. Reports assessing the state of contemporary childhood would often recall the childhoods of earlier generations, such as those of the authors, and these recollections portrayed dualistic notions of childhood as a ‘now versus then’ phenomenon – for example, Louv’s statement that ‘baby boomers or older, enjoyed a kind of free, natural play that seems… like a quaint artifact’, or Lenore Skenazy’s reference to ‘the freedom we had as kids’.223 The world of child’s play does not stand apart from factors of gender, race, class, region, and ability, but when speaking of what ‘children used to do’, experts, charities, and journalists could overlook this. Furthermore, Holloway and Valentine’s Cyberkids highlighted how computer use was highly controlled and negotiated in homes, and that parents were not at all powerless to prevent children being ‘drawn in’ to screen-time if they wanted to stop them.224 The crux of the problem lay not with children, parents, or technology, but with the reality – or the perception of the reality – that children’s outdoor environments were dangerous and their indoor environments were safe.

    1.5 Conclusion

    Examining the course of expert discourse across this period, we can identify common factors that led to children spending less time outdoors than their parents’ generations had. Policymakers under Thatcher, Major, and Blair, by encouraging car-oriented expansion and urbanism, the segregation of road and footpath networks, and stringent parental restriction of children, reduced the number of outdoor places children could use. By concentrating on the threat of stranger-danger, and later also youth-danger, they encouraged people to view even car-free outdoor environments as unsafe, and children themselves as sometimes the danger to be kept out. Furthermore, by encouraging technologisation at a policy level whilst delegating management of its negative effects to the family level, they left parents to negotiate technology use with their children without accounting for differing family circumstances. The futurist conservatism of experts in government perpetuated existing trends towards cars, stranger-danger fears, and the adoption of technology, trends that significantly reduced children’s access to quality outdoor spaces, particularly for those in working-class neighbourhoods. Conversely, the nostalgic progressivism of academics invoked memories and notions of the past to inform their calls for change.

    Academics’ proposals focussed mostly on cars and strangers, arguing for a reclamation of public land for pedestrians and against the moral panics over child safety that had been fuelled by sensationalist media stories. In their arguments for structural change, however, they were also often dismissive of the experiential and emotional knowledge that individual children and parents could provide, which – to a certain extent – threatened their authority. Regarding technology many academics were also critical, concentrating on the health risks of screen time and sedentariness, and in doing so they, like experts in government, placed significant emphasis on the idea that it was by the choice of individual children and families that playtime had moved indoors. In truth, technology played a role supplemental to cars and strangers. Indeed, despite the efforts of those with expertise, the public perception of the dangers of all three factors was very different from their reality. This was due, in large part, to the role that newspapers and public campaign groups played throughout this period in informing popular opinion. What was the extent of this role? In the next chapter I will explore this subject and explain the origins and legacies of the period’s most influential public scandals, both legitimate and otherwise.


    References

    1 Du Gay et al., Doing Cultural Studies: The Story of the Sony Walkman (Milton Keynes: Open University, 1997), 3.

    2 Emanuela Mora et al., ‘Practice Theories and the “Circuit of Culture”: Integrating Approaches for Studying Material Culture,’ Sociologica 13, no. 3 (2019): 59.

    3 Tisdall, A Progressive Education?, 13.

    4 Colin Buchanan, Traffic in Towns: Reports of the Steering Group and Working Group Appointed by the Minister of Transport (London: H.M.S.O., 1963), 223.

    5 Michael Dower, ‘Fourth Wave, the Challenge of Leisure,’ Architects’ Journal 20 (1965): 123; ‘Leisureopolis,’ Yorkshire Architect (May/June 1969).

    6 David Leibling, Car ownership in Great Britain (London: RAC Foundation, 2008), 4.

    7 Department for Transport, Road Traffic Estimates: Great Britain 2019, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/916749/road-traffic-estimates-in-great-britain-2019.pdf.

    8 Department for Transport, Transport Statistics Great Britain: 2011, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/8995/vehicles-summary.pdf; Joe Frost, A History of Children’s Play and Play Environments (London: Routledge, 2010), 202.

    9 Geoffrey Lean, ‘Tories ditch the “car economy”,’ The Independent, 21 January, 1996, 21.

    10 John Prescott, speech in the House of Commons, 18 May 1989; Department for Transport, Towards a Sustainable Transport System: Growth in a Low Carbon World (London: Department for Transport, 2007): 85.

    11 Prescott, speech in the House of Commons, 18 May 1989; Department for Transport, Towards a Sustainable Transport System, 85.

    12 Ben Webster, ‘Broken promises leave dozens of towns in queue for a bypass,’ The Times, 11 September, 2006, 16; Department for Transport, Transport Statistics Great Britain: 2011, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/8995/vehicles-summary.pdf; Lean, ‘Tories ditch the “car economy”,’ 21.

    13 John Stewart, Roadblock: How People Power Is Wrecking The Roads Programme (Alarm UK, 1995), 1.

    14 Chris Miller, Environmental Rights: Critical Perspectives (London: Routledge, 2012), 86.

    15 Gerard Gilbert, ‘Motorway madness,’ The Independent, 17 February, 1993. ‘An unlikely alliance has grown up between conservationists, Nimbys and a New Age tribe called the Dongas.’

    16 Kate Evans, Copse: the Cartoon Book of Tree Protesting (Self Published, 1998), 12.

    17 As quoted in: Paul Brown, ‘Prescott Points Buses to Fast Lane,’ The Guardian, 6 June, 1997, 10.

    18 Webster, ‘Broken promises,’ 16.

    19 Leibling, Car ownership in Great Britain, 4.

    20 RAC Foundation, ‘Car ownership rates per local authority in England and Wales,’ https://www.racfoundation.org/assets/rac_foundation/content/downloadables/car%20ownership%20rates%20by%20local%20authority%20-%20december%202012.pdf.

    21 Department for Transport, Road Traffic Estimates: Great Britain 2019, https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/916749/road-traffic-estimates-in-great-britain-2019.pdf.

    22 Patricia Somers and Jim Settle, ‘The helicopter parent: Research toward a typology,’ College and University 86, no.1 (2010): 18.

    23 Margaret Mead, ‘Neighbourhoods and human needs,’ Children’s Environments Quarterly 1, no. 4 (1984); Sheridan Bartlett et al., Cities for Children: Children’s Rights, Poverty and Urban Management (London: Earthscan, 1999); Frost, A History of Children’s Play and Play Environments.

    24 Ibid, 202.

    25 Mayer Hillman and John Adams, ‘Children’s Freedom and Safety,’ Children’s Environments 9, No. 2 (1992): 10-22.

    26 The Royal Society for the Prevention of Accidents, A History of Road Safety Campaigns: Drink Drive, Seat Belts and Speeding (Birmingham: ROSPA, 2018), 1.

    27 Ibid, 3.

    28 Department for Transport, ‘Reported road casualties in Great Britain, provisional estimates involving illegal alcohol levels: 2019,’ https://www.gov.uk/government/statistics/reported-road-casualties-in-great-britain-provisional-estimates-involving-illegal-alcohol-levels-2019/reported-road-casualties-in-great-britain-provisional-estimates-involving-illegal-alcohol-levels-2019.

    29 ROSPA, A History of Road Safety Campaigns, 5.

    30 Hillman and Adams, ‘Children’s Freedom and Safety,’ 11.

    31 ‘Story of THINK!,’ Think!, accessed 15 February 2022, https://www.think.gov.uk/about-think/story-of-think/.

    32 Department for Transport, Traffic Advisory Leaflet 9/99 (London: Department for Transport, 1999).

    33 Muhammad Ishaque and Robert Noland, ‘Making Roads Safe for Pedestrians or Keeping Them Out of the Way?: An Historical Perspective on Pedestrian Policies in Britain,’ The Journal of Transport History 27, no.1 (2006): 123.

    34 Ibid, 129.

    35 Ibid, 131.

    36 Ibid, 132.

    37 Karl Whitney, ‘“A brave new world”: what happened to Newcastle’s dream for a vertical city?,’ The Guardian, 7 February, 2017.

    38 Department for Transport, The Design of Pedestrian Crossings (London: Department for Transport, 1995), 2.

    39 Mark Pinder, ‘The “parallel world” of Newcastle’s walkways,’ photograph, 2017, The Guardian.

    40 Street Playgrounds Act 1938, Chapter 37, Section 1.

    41 Cowman, ‘Play streets,’ 254.

    42 Tim Gill, ‘Home Zones in the UK: History, Policy and Impact on Children and Youth,’ Children, Youth and Environments 16, no.1 (2006): 91.

    43 Mike Biddulph, Home zones: A planning and design handbook (London: The Policy Press, 2001), 1.

    44 Gill, ‘Home Zones in the UK,’ 92.

    45 Ibid, 93.

    46 House of Commons, ‘Select Committee on Transport, Local Government and the Regions Eighth Report,’ 2002.

    47 House of Commons Transport Committee, ‘Better roads: Improving England’s Strategic Road Network,’ 2014, 7.

    48 Anthony Seldon, Blair’s Britain, 1997-2007 (Cambridge: Cambridge University Press, 2007), 15.

    49 Department of the Environment and Department of Transport, This Common Inheritance: Britain’s Environmental Strategy (London: HMSO, 1990), 13.

    50 Hillman and Adams, ‘Children’s Freedom and Safety,’ 18-19.

    51 Ibid, 18-19.

    52 Department of Transport, Children and roads: A safer way (London: HMSO, 1990), 16.

    53 Christopher Chope, speech in the House of Commons, 16 November 1990.

    54 Ibid.

    55 Elizabeth Clapp, ‘Welfare and the Role of Women: The Juvenile Court Movement,’ Journal of American Studies 28, no. 3 (1994): 360.

    56 Jon Winder, Designed for Play: Children’s Playgrounds and the Politics of Urban Space, 1840–2010 (London: University of London Press, 2024), 227; Monica Flegel, ‘“Facts and Their Meaning”: Child Protection, Intervention, and the National Society for the Prevention of Cruelty to Children in Late Nineteenth-Century England,’ Victorian Review 33 (2007): 38.

    57 Manchester Guardian, ‘England and Its Playgrounds,’ The Manchester Guardian, 25 November 1927.

    58 Cranwell, ‘Street play and organized space,’ 45.

    59 Platt, The Child Savers, 10.

    60 Ibid, 126.

    61 Ibid, 126; Winder, Designed for Play, 47.

    62 Tisdall, A Progressive Education?, 248.

    63 Ibid, 248.

    64 Ibid, 20.

    65 Moore, Childhood’s domain, 14, 223.

    66 Ibid, 20.

    67 Rob White, ‘Youth and the Conflict Over Urban Space,’ Children’s Environments 10, no.1 (1993): 89.

    Play Streets are discussed in further detail later in this chapter.

    68 Patsy Eubanks Owens, ‘Natural Landscapes, Gathering Places, And Prospect Refuges: Characteristics of Outdoor Places Valued by Teens,’ Children’s Environments Quarterly 5, no.2 (1988): 21.

    69 Pete King and Polly Sills-Jones, ‘Children’s Use of Public Spaces and the Role of the Adult – a Comparison of Play Ranging in the UK, and the Leikkipuisto (Play Parks) in Finland,’ International Journal of Play 7 (2018): 28; Mark Francis and Randolph Hester, The Meaning of Gardens: Idea, Place, and Action (London: MIT Press, 1990), 28.

    70 White, ‘Youth and the Conflict Over Urban Space,’ 85.

    71 Nikolas Rose, Governing the soul: the shaping of the private self (London: Routledge, 1990), 203.

    72 White, ‘Youth and the Conflict Over Urban Space,’ 85.

    73 See these works of the period: Paul Wilkinson, ‘Safety in Children’s Play Environments,’ Children’s Environments (1985); Peter Heseltine, ‘Accidents on Children’s Playgrounds,’ Children’s Environments (1985); Tom Jambor, ‘Risk-Taking Needs in Children: An Accommodating Play Environment,’ (1986); Joe Frost, ‘Play Environments for Young Children in The USA: 1800 – 1990,’ Children’s Environments (1989); Kaj Noschis, ‘Child Development Theory and Planning for Neighbourhood Play,’ (1992); Lorraine Maxwell, Mari Mitchell, and Gary W. Evans, ‘Effects of Play Equipment and Loose Parts on Preschool Children’s Outdoor Play Behavior: An Observational Study and Design Intervention,’ Children, Youth and Environments (2008).

    74 Wilkinson, ‘Safety in Children’s Play Environments,’ 10; Rob Wheway and Alison Millward, Facilitating Play on Housing Estates (York: Joseph Rowntree Foundation, 1997), 15.

    75 Wilkinson, ‘Safety in Children’s Play Environments,’ 10.

    76 Maxwell et al., ‘Effects of Play Equipment and Loose Parts on Preschool Children’s Outdoor Play Behavior,’ 60.

    77 Stephen Wagg, ‘“Don’t Try to Understand Them”: Politics, Childhood and the New Education Market,’ in Thatcher’s Children? Politics, Childhood and Society in the 1980s and 1990s, eds. Jane Pilcher and Stephen Wagg (London: Routledge, 1996), 18.

    78 John Carvel, ‘Blair Relives School Dilemma,’ The Guardian, 20 January 1999, 2.

    79 Wikimedia Commons contributors, ‘File: Campaign – One false move and you’re dead.png,’ Wikimedia Commons, https://commons.wikimedia.org/w/index.php?title=File:Campain-_One_false_move_and_you%27re_dead.png&oldid=127818792 (accessed 3 June, 2021).

    80 Mayer Hillman et al., One False Move: A Study of Children’s Independent Mobility (London: PSI, 1990), 106.

    81 Department of Transport, Children and roads, 3; Letter to The Times on 27 March 1989, from the Chief Constable of Warwickshire writing in his capacity of secretary to the Safety Committee of ACPO, as quoted in Hillman et al., One False Move, 7.

    82 Hillman et al., One False Move, 105.

    83 Ibid, 2.

    84 Roger Hart et al., ‘Introduction,’ Children, Youth and Environments 13, no.1 (2003): I.

    85 Ibid, II.

    86 Mead, ‘Neighbourhoods and human needs,’ 4.

    87 Ward, The Child in the City, 206.

    88 Claire Freeman, ‘Planning and Play: Creating Greener Environments,’ Children’s Environments 12, no. 3 (1995): 382.

    89 Claire Freeman and Elizabeth Aitken-Rose, ‘Future Shapers: Children, Young People, and Planning in New Zealand Local Government,’ Environment and Planning 23 (2005): 233.

    90 Rachel Kaplan, ‘Patterns of Environmental Preference,’ Environment and Behaviour 9 (1977): 213.

    91 Lenore Behar and David Stevens, ‘Wilderness camping: an evaluation of a residential treatment program for emotionally disturbed children,’ Orthopsychiatry 4 (1978): 644, 653.

    92 See also such studies as Hugh Matthews and Melanie Limb, ‘Defining an agenda for the geography of children,’ Progress in Human Geography 23 (1999): 61-90; Jo Boyden and Pat Holden, Children of the Cities (London: Zed, 1991); Jean Lave and Etienne Wenger, Communities of Practice (Cambridge: Cambridge University Press, 1991); Bartlett et al., Cities for Children.

    93 Madeleine Nash, ‘Fertile minds,’ Time 149 (1997): 51.

    94 Cowman, ‘Play streets,’ 254.

    95 Ibid, 233.

    96 Wajcman, Feminism Confronts Technology, 129.

    97 Katrina Navickas, Contested Commons (London: Reaktion Books, 2025), abstract.

    98 Cowman, ‘Play streets,’ 241.

    99 AA Foundation for Road Safety Research, The facts about road accidents and children (London: AA Motoring Trust, 2003), 8; Michael Ungar, ‘Kids Are Safer Outside Than Inside Their Homes,’ Psychology Today, 11 June, 2015; National Highway Traffic Safety Administration, Traffic Safety Facts 2017 Data: Pedestrians: 2019 (Washington: NHTSA, 2019).

    100 Play England, as quoted in: Anushka Asthana, ‘Kids need the adventure of “risky” play,’ The Observer, 3 August 2008, 6.

    101 Stephen Moss, Natural Childhood (Corsham: Park Lane Press, 2012), 13.

    102 Ibid, 13.

    103 Cindi Katz, ‘Power, Space and Terror: Social Reproduction and the Public Environment,’ in The Politics of Public Space, ed. Setha Low (New York: Routledge, 2006), 31-32.

    104 As discussed in 1.2.1.

    105 Crane, Child Protection in England, 33.

    106 Ibid, 33.

    107 Philip Jenkins, Intimate Enemies: Moral Panics in Contemporary Great Britain (New York: Aldine de Gruyter, 1992), 154.

    108 Thérèse Murphy and Noel Whitty, ‘The Question of Evil and Feminist Legal Scholarship,’ Feminist Legal Studies 14, no. 1 (2006): 8.

    109 Crane, Child Protection in England, 6.

    110 Ian Butler and Mark Drakeford, Scandal, Social Policy and Social Welfare: How British Public Policy is Made (Basingstoke: Palgrave Macmillan, 2003), 88.

    Ashley Wroe, Social work, child abuse and the press (Norwich: Social Work Monographs, 1988), 11.

    111 ‘What have we learned? Child death scandals since 1944,’ Community Care, accessed 30 September 2021, https://www.communitycare.co.uk/2007/01/10/what-have-we-learned-child-death-scandals-since-1944/.

    112 Ted Oliver, ‘Baby death case workers rapped,’ Daily Mail, 1 October 1984, 10; Sarah Boseley, ‘Social Workers Denounce Councils,’ The Guardian, 27 July 1985, 3; BBC Radio News, ‘Lambeth social services failure,’ archived by British Universities Film and Video Council, accessed 2 November 2021, http://bufvc.ac.uk/tvandradio/lbc/index.php/segment/0011300005011.

    113 BBC Radio News, ‘Social services criticised over Tyra Henry death,’ archived by British Universities Film and Video Council, accessed 2 November 2021, http://bufvc.ac.uk/tvandradio/lbc/index.php/segment/0000400037008.

    114 Child Abduction Act 1984, Introductory Text (1).

    115 Nick Basannavar, Sexual Violence against Children in Britain since 1965: Trailing Abuse (Basingstoke: Palgrave Macmillan, 2021), 186.

    116 Ibid, 188.

    117 Beatrix Campbell, Secrets and Silence: Uncovering the Cleveland Child Sexual Abuse Cover-up (Bristol: Policy Press, 2023), 5-6.

    118 See analysis of the 1989 Children Act later in this chapter.

    119 Following the publication of this article: Henry Kempe et al., ‘The Battered-Child Syndrome,’ Journal of the American Medical Association 181 (1962): 143-154.

    120 Esther Rantzen, ‘30 Years of ChildLine (1986-2016),’ Witness seminar held 1 June 2016, at the BT Tower, London, transcript held at Modern Records Centre, University of Warwick, Coventry.

    121 Tisdall, A Progressive Education?, 86-87.

    122 Crane, Child Protection in England, 13.

    123 Tisdall, A Progressive Education?, 3.

    124 Roy Kozlovsky, The Architectures of Childhood (Farnham: Ashgate, 2013), 18.

    125 As revealed in the dismissive approach to the issue in: Roy Kozlovsky, The Architectures of Childhood.

    Elizabeth Anderson, The Disabled Schoolchild: a study of integration in primary schools (London: Routledge, 1973).

    126 Crane, Child Protection in England, 6; Lorraine Radford, Child abuse and neglect in the UK today (London: NSPCC, 2011), 11.

    127 Chris Waters, ‘“Dark strangers” in our midst: discourses of race and nation in Britain, 1947–1963,’ Journal of British Studies 36 (1997): 222.

    128 Child Care Act 1980, Part 1, Section 1.

    129 Child Support Act 1991, Part 1, Section 4.

    Academic response to these pieces of legislation will be explored later in this sub-section.

    130 Children Act 1989, Part 3, Section 20 (8).

    131 Peter Rothery, ‘Terminating and Restricting Parental Responsibility,’ Deans Court Chambers, accessed 17 August 2025, https://www.deanscourt.co.uk/articles/terminating-and-restricting-parental-responsibility.

    132 Children Act 1989, Part 1, Section 2.

    133 Ibid, Part 2, Section 8.

    134 Chris Beckett, ‘Waiting for Court Decisions: A Kind of Limbo,’ Adoption & Fostering 24 (2000): 55-62; Children Act 1989, Part 5, Section 8; Nick Allen, Making Sense of the Children Act 1989 (Chichester: Wiley, 2005), 204-205.

    135 Bridget McKeigue and Chris Beckett, ‘Care Proceedings under the 1989 Children Act: Rhetoric and Reality,’ The British Journal of Social Work 34, no. 6 (2004): 835.

    136 Nellie Trickett et al., ‘The Impact of Sexual Abuse on Female Development: Lessons from a Multigenerational, Longitudinal Research Study,’ Development and Psychopathology 23, (2011): 453; Paul Mullen et al., ‘Childhood Sexual Abuse and Mental Health in Adult Life,’ The British Journal of Psychiatry 163 (1993): 721; Lucy Delap, ‘“Disgusting Details Which Are Best Forgotten”: Disclosures of Child Sexual Abuse in Twentieth-Century Britain,’ Journal of British Studies 57 (2018): 79.

    137 Jane Pilcher, ‘Gillick and After: Children and Sex in the 1980s and 1990s,’ in Thatcher’s Children? Politics, Childhood and Society in the 1980s and 1990s, eds. Jane Pilcher and Stephen Wagg (London: Routledge, 1996), 77.

    138 Local Government Act 1988, Part 4, Section 28 (1).

    139 Crane, Child Protection in England, 85.

    140 ‘Child Protection Charities,’ Charity Choice, accessed 3 August 2021, https://www.charitychoice.co.uk/charities/children-and-youth/child-protection.

    141 Other charities of this nature from the North East include: Children North East, The Children’s Foundation, Being Children, and Home-Start Teesside.

    142 Central Office of Information, ‘Think Bubble – Adult & Child,’ public information film aired 1994, 1 minute, National Archives.

    143 Pilcher, ‘Gillick and After,’ 77.

    144 Eve Colpus, ‘30 Years of ChildLine (1986-2016),’ witness seminar held 1 June 2016, at the BT Tower, London, transcript held at Modern Records Centre, University of Warwick, Coventry.

    145 Shaun Woodward, ‘30 Years of ChildLine (1986-2016),’ witness seminar held 1 June 2016, at the BT Tower, London, transcript held at Modern Records Centre, University of Warwick, Coventry.

    146 Woodward, ‘30 Years of ChildLine (1986-2016),’ witness seminar.

    147 Matthew Hilton et al., A Historical Guide to NGOs in Britain: Charities, Civil Society and the Voluntary Sector since 1945 (Basingstoke: Palgrave Macmillan, 2012), 84.

    148 Other examples of popular academic texts of this nature include: Howard Gadlin’s Child Discipline and the Pursuit of Self (1978), Marion Shoard’s The Theft of the Countryside (1980) and This Land is Our Land (1987), Nikolas Rose’s Governing the soul (1990), Ray Lorenzo’s Too Little Time and Space for Childhood (1992), Sheridan Bartlett et al.’s Cities for Children (1999).

    149 Nigel Parton, ‘The Beckford Report: A Critical Appraisal,’ The British Journal of Social Work 16, no. 5 (1986): 569.

    150 Alex Mold and Virginia Berridge, Voluntary Action and Illegal Drugs: Health and Society in Britain since the 1960s (Basingstoke: Palgrave Macmillan, 2010), 22.

    151 Kim Susan Blakely, ‘Parents’ Conceptions of Social Dangers to Children in the Urban Environment,’ Children’s Environments 11 (1994): 23.

    152 Ibid, 24.

    153 Rachel Pain, ‘Paranoid Parenting? Rematerializing Risk and Fear for Children,’ Social & Cultural Geography 7 (2006): 221.

    154 Katz, ‘Power, Space and Terror,’ in The Politics of Public Space, 31-32.

    155 Susan Handy et al., ‘Neighborhood Design and Children’s Outdoor Play: Evidence from Northern California,’ Children, Youth and Environments 18, no. 2 (2008): 162.

    156 Harry Hendrick, Children, Childhood and English Society, 1880–1990 (Cambridge: Cambridge University Press, 1997), 98-99.

    157 Nicci Gerrard, ‘The mob will move on, the pain never can,’ The Observer, 3 May 1998.

    158 Tim Newburn, ‘Back to the Future? Youth Crime, Youth Justice and the Rediscovery of “Authoritarian Populism”,’ in Thatcher’s Children? Politics, Childhood and Society in the 1980s and 1990s, eds. Jane Pilcher and Stephen Wagg (London: Routledge, 1996), 64.

    159 John Major, ‘Back to Basics,’ transcript of speech delivered at Conservative Party Conference, Blackpool, 8 October, 1993, http://britishpoliticalspeech.org/speech-archive.htm?speech=139.

    160 Barry Goldson, ‘“Difficult to Understand or Defend”: A Reasoned Case for Raising the Age of Criminal Responsibility,’ The Howard Journal of Criminal Justice 48 (2009): 514.

    161 Tony Blair, speech in the House of Commons, 11 January 1994.

    162 Ministry of Justice, ‘Youth Justice Statistics,’ GOV.UK, accessed August 22, 2025, https://www.gov.uk/government/collections/youth-justice-statistics; Barry Goldson, ‘A Reasoned Case for Raising the Age of Criminal Responsibility,’ The Howard Journal of Criminal Justice 48, no 5 (2009): 514.

    163 Tim Bateman and Neal Hazel, Youth Justice Timeline (Manchester: Beyond Youth Custody, 2014), 3.

    164 Crime and Disorder Act 1998, Part 1, Section 1 (1A).

    165 Crime and Disorder Act 1998, Part 1, Section 8.

    166 Anti-social Behaviour Act 2003, Part 4, Section 30 (3).

    167 Department for Children, Schools and Families, The Children’s Plan: Building brighter futures (London: Department for Children, Schools and Families, 2007), 5.

    168 Simon Hughes, Commons Debate, 10 June 2008; Jack Straw, Commons Debate, 10 June 2008.

    169 Ishita Pande, ‘Is the History of Childhood Ready for the World? A Response to “The Kids Aren’t All Right”,’ The American Historical Review 125, no. 4 (2020): 1304.

    170 Brian Hitchen, ‘How do you feel now you little bastards?,’ Daily Star, 25 November 1993, 1.

    171 Yago Zayed, TV License Fee Statistics (London: House of Commons Library, 2022), 7.

    Jennifer Zosh, Learning in the Digital Age: Putting Education Back in Educational Apps for Young Children (Montreal: Encyclopaedia on Early Childhood Development, 2016), 1.

    172 Peter Buchner, ‘Growing Up in Three European Regions,’ in Growing Up in Europe: Contemporary Horizons in Childhood and Youth Studies, ed. Lynne Chisholm (Berlin: Gruyter, 1995), 47.

    173 Petroc Taylor, ‘UK Households: Ownership of Internet Connection 1998–2018,’ Statista, published January 18, 2023, accessed July 22, 2025, https://www.statista.com/statistics/369035/uk-households-internet-connection/.

    174 Ellen Helsper and Sonia Livingstone, ‘Gradations in Digital Inclusion: Children, Young People and the Digital Divide,’ New Media & Society 9 (2007): 684.

    175 ‘Gender Insights in Computing Education,’ National Centre for Computing Education, April 2023, 7.

    176 Finlo Rohrer, ‘In praise of summer mischief,’ BBC News Magazine, accessed 15 February 2022, http://news.bbc.co.uk/1/hi/magazine/7510372.stm.

    177 Matthew Kelly, written message to author, August 2025.

    178 Mark Aldridge and Andy Murray, T is for Television (London: Reynolds & Hearn, 2008), 38-41.

    179 Office for National Statistics, ‘170 Years of Industrial Change across England and Wales,’ The National Archives, accessed 15 February 2022, https://webarchive.nationalarchives.gov.uk/ukgwa/20160106001413/http://www.ons.gov.uk/ons/rel/census/2011-census-analysis/170-years-of-industry/170-years-of-industrial-changeponent.html.

    180 Margaret Thatcher, ‘Speech on Microcomputers in Schools,’ transcript of speech delivered at unknown location, 6 April, 1981, https://www.margaretthatcher.org/document/104609.

    181 Tony Blair as quoted in: Ewen MacAskill, ‘Blair’s Promise: Everyone,’ The Guardian, 2 October, 1996, 6.

    182 Margaret Thatcher, ‘Remarks visiting TV South,’ transcript of speech delivered at Vinters Park, Maidstone, Kent, 6 January, 1984, https://www.margaretthatcher.org/document/105595.

    183 Communications Act 2003, Part 3, Section 4.

    184 Kevin Browne and Catherine Hamilton-Giachritsis, ‘The influence of violent media on children and adolescents: a public-health approach,’ The Lancet 365 (2005): 702-704.

    185 Frost, A History of Children’s Play and Play Environments, 229.

    186 Martin Herbert, Clinical Child Psychology: Social Learning, Developments and Behaviour (London: Wiley-Blackwell, 1991), 170.

    187 Moira Bovill and Sonia Livingstone, Families and the Internet: An Observational Study of Children and Young People’s Internet Use (London: British Telecom, 2001), 31.

    188 Cora Martin and Leonard Benson, ‘Parental Perceptions of the Role of Television in Parent-Child Interaction,’ Journal of Marriage and Family 32, no. 3 (1970): 411.

    189 Norman Warner, speech in the House of Lords, 16 November, 2004.

    Al Aynsley-Green as quoted in: ‘Mothers ‘must learn obesity risk’,’ BBC News, accessed 15 February 2022, http://news.bbc.co.uk/1/hi/uk_politics/6078490.stm.

    190 Gary Varvel, ‘Nature Deficit Disorder,’ Indianapolis Star, January 2007.

    191 Ann Heilmann et al., ‘Longitudinal associations between television in the bedroom and body fatness in a UK cohort study,’ International Journal of Obesity 41 (2017): 1503; Alice Goisis et al., ‘Why are poorer children at higher risk of obesity and overweight? A UK cohort study,’ European Journal of Public Health 26 (2015): 8.

    192 David Buckingham, ‘New media, new markets, new childhoods? Children’s changing cultural environment in the age of digital technology,’ in An Introduction to Childhood Studies, ed. Mary Jane Kehily (Albany: SUNY Press, 2015), 158.

    193 Beck, Risk Society, 10.

    194 Stephen Segaller, Nerds 2.0.1 (New York: TV Books, 1999), 178.

    195 Sandra Calvert, Children’s Journeys through the Information Age (Pennsylvania: McGraw, 1999); Kirsten Drotner, ‘Dangerous Media? Panic Discourses and Dilemmas of Modernity,’ Paedagogica Historica 35 (1999): 593-619; Frost, A History of Children’s Play and Play Environments, 199.

    196 Ray Lorenzo, Too Little Time and Space for Childhood (UNICEF International Child Development Centre, 1992).

    Neil Postman, The Disappearance of Childhood (New York: Delacorte, 1994), 52.

    197 Richard Lanham, The Electronic Word: Democracy, Technology and the Arts (Chicago: University of Chicago Press, 1993), 60; John Tomlinson, Globalization and Culture (Cambridge: Polity Press, 1999).

    198 Jon Katz, Media Rants: Post Politics in the Digital Nation (San Francisco: Hardwired, 1997), 159.

    199 Oppenheimer, The Flickering Mind, 218.

    200 Livingstone, Young People and New Media, 20.

    201 Marilyn Campbell, ‘The impact of the mobile phone on young people’s social life,’ in Social Change in the 21st Century, ed. Karen Barnett (Queensland: Queensland Press, 2005), 10.

    202 Stephen Williams and Lynda Williams, ‘Space Invaders: The Negotiation of Teenage Boundaries through the Mobile Phone,’ The Sociological Review 53 (2003): 314-331.

    203 Tony Charlton et al., ‘Mobile Telephone Ownership and Usage among 10- and 11-Year-Olds,’ Emotional and Behavioural Difficulties 7 (2002): 156.

    204 Dominique Pasquier, ‘Media at home: domestic interactions and regulation,’ in Children and Their Changing Media Environment, ed. Sonia Livingstone (London: Routledge, 2001), 145.

    205 Buchner, ‘Growing Up in Three European Regions,’ 47.

    206 Livingstone, Young People and New Media, 169.

    207 Frost, A History of Children’s Play and Play Environments, 239; Freeman and Tranter, Children and Their Urban Environment: Changing Worlds, 23.

    208 Frost, A History of Children’s Play and Play Environments, 239.

    209 Tonya Rooney, ‘Trusting children: How do surveillance technologies alter a child’s experience of trust, risk and responsibility?,’ Surveillance & Society 7 (2010): 344.

    210 Ibid, 350.

    211 James Wandersee and Elisabeth Schussler, ‘Preventing Plant Blindness,’ The American Biology Teacher 61, (1999): 82.

    212 Ian Rotherham, Cultural Severance and The Environment (Dordrecht: Springer, 2013), V; Louv, Last Child in the Woods, 3.

    213 See many texts from this period, including: Louv, Last Child in the Woods; Carr, The Shallows; Skenazy, Free Range Kids; Frost, A History of Children’s Play and Play Environments; Lorenzo, Too Little Time and Space for Childhood; Postman, The Disappearance of Childhood; Pamela Riney-Kehrberg, The Nature of Childhood: An Environmental History of Growing Up in America Since 1865 (Lawrence: University Press of Kansas, 2014).

    214 Sarah Holloway and Gill Valentine, Cyberkids: Children in the Information Age (London: RoutledgeFalmer, 2002), 51.

    215 Rebecca Caswell and Tom Warman, Play for Today (Halifax: The National Children’s Museum, 2014), 5.

    216 Ibid, 5.

    217 Briana Lees et al., ‘Screen Media Activity Does Not Displace Other Recreational Activities among 9-10 year-old Youth: A Cross-sectional ABCD Study,’ BMC Public Health 20, no. 1 (2020): 1792.

    218 Heilmann et al., ‘Longitudinal associations between television in the bedroom and body fatness in a UK cohort study,’ 1503. For press see: Christine Ro, ‘Why ‘plant blindness’ matters,’ BBC, 29 April 2019, accessed 4 June 2020, https://www.bbc.com/future/article/20190425-plant-blindness-what-we-lose-with-nature-deficit-disorder; Neil Midgley, ‘The explosion of countryside TV helping treat our “nature deficit disorder”,’ The Guardian, 27 March 2016, accessed 4 June 2020, https://www.theguardian.com/media/2016/mar/27/countryfile-bbc-nature-deficient-disorder; Tamara Cohen, ‘Let Granny teach our children how to play outside, say National Trust as it warns of “nature deficit” curse,’ The Daily Mail, 30 March 2012, accessed 4 June 2020, https://www.dailymail.co.uk/news/article-2122543/Nature-deficit-latest-curse-hit-children-obesity-health-safety.html; Alice Wilkinson, ‘Have you got Nature Deficit Disorder? Then ditch the gym – it’s time to get outdoors,’ The Telegraph, 6 February 2017, accessed 4 June 2020, https://www.telegraph.co.uk/health-fitness/body/have-got-nature-deficit-disorder-ditch-gym-time-get-outdoors/; Caroline Lucas and Mary Colwell, ‘Children are developing a nature deficit disorder,’ The Sunday Times, 25 October 2018, accessed 4 June 2020, https://www.thetimes.co.uk/article/children-are-developing-a-nature-deficit-disorder-z50bhflxz.

    219 Louv, Last Child in the Woods, 73, 127.

    220 Nicholas Carr, The Shallows: What the Internet Is Doing to Our Brains (New York: W.W. Norton, 2010), 32.

    221 Ibid, 141.

    222 Louv, Last Child in the Woods, 3.

    223 Ibid, 33.

    Lenore Skenazy, Free Range Kids (San Francisco: Wiley, 2009), 228.

    224 Holloway and Valentine, Cyberkids, 52.

  • Devoted Admirers and Bitter Enemies – Assessing our current understanding of T. Dan Smith Part 3: Sources

    If you are unfamiliar with the history of T. Dan Smith, it is advised that you read Part 1 (‘Local Hero or Corrupt Councillor?’) and Part 2 (‘Which Way to Utopia’) before reading this.

    Thomas Daniel Smith will always be a controversial figure, a polariser of public opinion. For the people of Newcastle-upon-Tyne his legacy is unavoidable: it is spread all over the city in the form of vast swathes of concrete, often ill-repaired and forgotten, which speak of a separate world from that of the Victorian terraces and Georgian streets. For some, his imprisonment on charges of corruption in 1974 was justice well served to a politician who had taken advantage of the city to better himself and those close to him. For others, and for Smith himself, he had been pilloried by a political establishment who saw him as a threat to their own authority, and made the scapegoat for higher-ranking officials who were the truly corrupt forces at play. As he put it, he had to be disposed of because he ‘wasn’t one of them’. The three sources I investigate herein have been chosen to help me examine the character of T. Dan Smith in public life and to understand how the public perception of him has been informed by the media that portrayed him. Primarily, however, I examine the man’s vision and assess whether his passionate talk of creating a ‘new Brasilia’ was merely a show to cover his untoward deeds or whether it truly represented a devotion to positive change. Ultimately, I will conclude that these sources indicate that it did.

    The earliest of my three sources dates from 1969, before allegations of Smith’s involvement in corruption came to the fore. It is a booklet distributed by the Northern Economic Planning Council (NEPC) which sets out its vision for north-east development over the following 12 years, up to 1981. Smith was chairman of the NEPC at this time, and he wrote in the foreword that the document should be ‘given the widest possible circulation within the region’. It is a statement of desire for the future, and I will be assessing it to determine exactly what Smith envisioned he could achieve in the region had his tenure not been cut short by a jail sentence six years later. It also shows us what Smith thought most important to communicate to the people of the region.

    The second of my three sources dates from 1971, three years before Smith would be convicted on charges of corruption, though it still originates from a time when he was embroiled in allegations brought against him. The source itself is a correspondence between Smith and Henry Parris (Smith’s friend and a lecturer in politics at Durham University), and it follows straight after a 1971 corruption trial at which Smith was found not guilty. Parris’ letter to Smith also includes an attached article from the Guardian, which he explains is where he learned about Smith’s trial. I will be using this source to assess public attitudes towards Smith before he pleaded guilty to corruption and to examine how he was depicted in the media during this tumultuous time.

    My final source dates from 1987, after Smith had been released from jail and had somewhat receded from the public eye. It is a documentary film produced by the Newcastle-based studio Amber Films which includes interviews with Smith and with several other figures from his past. From this source we can assess Smith’s responses to the allegations made against him, and see how his legacy was publicly remembered after his spotlight had faded.

    The NEPC booklet does not provide an optimistic view of the future for the north-east; indeed, it is resigned to a process of damage mitigation, using phrases such as ‘the region cannot hope to halt net outward migration’ and ‘heavy job loss… will almost certainly continue into the mid-1970s’. At first glance this does not align well with T. Dan Smith’s notion that he could turn Newcastle into the ‘Venice of the north’, as he would sometimes claim. However, reading further into the NEPC’s policy proposals reveals otherwise, for the document outlines a strategy of concentrating all resources towards certain key areas of economic activity within the region at the expense of others, Newcastle being the foremost of these chosen areas. With astonishing brusqueness for a document made widely available to the general public, it pronounces that ‘There is no point in pretending that all the communities which now exist in the region will be capable of surviving’ and states that ‘this policy would accept… a gradual rundown of some of the less favourably placed communities’. The spines around which these ‘growth areas’ were to be built were the proposed ‘growth corridors’, an idea based on the conclusion that roads are critical to the economy and that investing in road infrastructure will bring wealth to the areas surrounding them. This idea came to Smith through his city planner Wilfred Burns, who took much inspiration from modern American cities. It is why most of the “new towns” built during this period, such as Washington and Meadowfield, were deliberately based around ‘motorway interchange points’. Comparing the map of proposed ‘growth corridors’ against a concurrent plan for road extensions reveals exactly how the two plans align: indeed, the ‘main growth corridor’ follows the line of the A1 motorway exactly.

    This document reveals the harsher side of Smith’s visionary rhetoric. It shows a genuine and practical belief in Newcastle’s ability to foster ‘seeds of future growth and prosperity’, but only at the acknowledged expense of other areas in the region. What it does demonstrate, however, is the practicality at the heart of Smith’s vision: he does not outline a perfect scenario, nor does he claim to hold all the answers, but he does want to attempt an ambitious strategy for turning the fortunes of the region around. This is not a document produced by a man who fails to understand the realities of his situation; it is a demonstration of Smith’s unreservedness in communicating to the people of the area what he realistically intended to achieve.

    It is interesting, therefore, to see in the newspaper article attached to Henry Parris’ letter how the media of the time portrayed this unreserved vision. What is immediately communicated through this source is the real respect and influence Smith had garnered despite the allegations brought against him. He is described as the man who ‘virtually invented regionalism’, and the article plays up his rags-to-riches tale, that of a man ‘unfettered by formal education’. The context of the article within this correspondence also reinforces the conclusion that Smith was well thought of, with a foreword stating that he had received ‘hundreds of messages’ of support. Smith’s determination to carry on with his work is also evident, which he states very plainly in both his newspaper interview and his personal reply to Parris: ‘I am planning my diary again with confidence’. It is interesting, and slightly sad, to note this optimism in the knowledge of the imprisonment that would follow this correspondence only three years later.

    However, there is a clear subtext to pick up when assessing this source which implies that Smith had his detractors at this time, as he certainly did. A fault of this source is that it does not fully represent the other side of the argument, as it only includes comment from those predisposed towards Smith: his friend, and a newspaper which broadly aligned with him politically. Nonetheless, when Parris writes that ‘I felt I could not remain silent’, or when the paper writes that Smith commanded over a ‘reluctant Newcastle-upon-Tyne’, it is made clear that Smith was not universally admired. Indeed, across all three of the primary sources I examine, the picture that develops of T. Dan Smith is a mixed one: a man who attracted ‘devoted admirers and bitter enemies’.

    Certainly the 1987 Amber documentary brings the conflicting elements of Smith’s career to the fore, using its two narrators, one a detractor and the other a supporter, as points from which to argue the two sides of the story. The film shows a society which finds it difficult to come to grips with Smith’s character, a man who cannot simply be labelled hero or villain. In particular, its interviews with members of the general public of Newcastle prove enlightening; an ambivalent atmosphere flows through these conversations, which seem to condemn and praise at the same time. One man states: ‘He did a wonderful job for the town, but unfortunately he was found out’, and another: ‘to me he’s a criminal… but now he’s more or less hailed as a hero’. The interviews with Smith himself show a very different character from that seen in the newspaper and the NEPC document. Those portray him as a bold and harsh character, not afraid to offend in order to get things done. Now, after his time in jail, the man set before us is defensive and calculating, carefully choosing his words to justify and explain his former self. In regard to public opinion of him, he says that people seem to ‘reflect the society that was able to convince them of whatever they want to be convinced of’; in other words, he feels the public have been misled as to his role in the wider corruption scandal involving the architect John Poulson and the Conservative Home Secretary Reginald Maudling. His view now is that he was made a scapegoat for ‘bigger fish’, and although he pleaded guilty to charges of corruption, he now claims to be innocent.

    Certainly, his decision to appear on such a programme lends legitimacy to his assertions. We see him now, still living in Newcastle, in one of the very concrete towers he laid the foundations for. He lives no ‘life of luxury’, as one of the citizens claims; he appears as he proclaims himself: cast out. Overall the feature portrays Smith in a sympathetic light and does not take as balanced a view as it wishes you to believe. It is another example, as with the newspaper, the letter, and the NEPC booklet, of people being swept along in the wake of T. Dan Smith. Through all of the sources his personal drive and commitment appear infectious to those around him; he is a naturally likeable character.

    What these sources all indicate is that T. Dan Smith had a genuine vision and love for his home city. As to the extent of his involvement in the corruption scandal, I do not believe these sources provide enough information to make an informed assessment. What they do show is that, from his heyday through his downfall to the present, he remains a man who can inspire passion in those around him.

    Author/Publisher: Louis Lorenzo

    First Published: 3rd of August 2018

    Last Modified: 3rd of August 2018

  • Which Way to Utopia? Assessing Our Current Understanding of T. Dan Smith Part 2: Historiography

    If you are unfamiliar with the history of T. Dan Smith, I recommend that you read part 1 (‘Local Hero or Corrupt Councillor?’) before reading this.

    For this article I have chosen eight pieces of historiography surrounding the life of T. Dan Smith to review. Given the relatively small amount of published material on T. Dan Smith, I believe these are sufficient to cover the historiography fully. Primarily, I will be assessing how attitudes towards Smith’s legacy have evolved over the years, in relation to his charges of corruption and his personal vision for the city of Newcastle-upon-Tyne. I highlight a clear transition in the historiography, progressing from the notably negative appraisals of the early 1980s towards the far more positive revisions of later years, culminating in Chris Foote-Wood’s 2010 text, which boldly proclaims Smith ‘NOT GUILTY AS CHARGED’. I will finally evaluate whether the historiography overall has provided a conclusive narrative on the figure of T. Dan Smith, and conclude that many unanswered questions surrounding his character remain to be explored.

    The earliest major text released about T. Dan Smith is his own: “An Autobiography”, published in 1970 before any corruption charges were brought against him. In contrast to later sources, Smith’s own text carries a far more nonchalant tone, not cast in the harsh light of criminality. Indeed, Smith gives the impression of a man stepping back from ‘my public life’; by its very nature an autobiography indicates a conclusion, a summary of one’s achievements written on the assumption that the best is behind you. One consistency with later sources is the emphasis placed on Smith’s working-class youth and the impact of the Second World War (WW2) on his life (perhaps because the later sources had no other source material to work with apart from Smith’s recollections). What Smith emphasises is his love of the radical politics of the post-WW2 era which gave birth to the National Health Service, and his disappointment at how quickly politicians thereafter turned instead to ‘petty things’. Certainly Smith casts himself as something of a visionary, someone seeking a return to ‘those radical days of 1945’. He regards the brutalist architecture executed under his stewardship, a factor so often used against him in later evaluations, as a prime example of this future-facing attitude, describing the buildings as being of ‘the highest standard and best design’.

    In 1975 a short article discussing Smith’s political career was published by an Australian university with an interest in local government studies. However, it makes no reference to the corruption trial that had taken place only one year earlier. Furthermore, the article is notably positive, saying that Smith ‘engendered an optimistic and dynamic attitude’ in the city council. If nothing else, the article shows the unusual impact Smith had on the political sphere of his time; few leaders of local councils are written about extensively at all, never mind from across the globe.

    After this point no major evaluations of T. Dan Smith appeared until, in typical fashion, two arrived very close together in the early 1980s. These are by far the most condemnatory texts on Smith. The first, “Nothing to Declare”, focuses primarily on John Poulson, but devotes an entire section to Smith. It casts him as a young visionary socialist with good intentions who lets himself be corrupted by Poulson and the system at large, proclaiming that ‘by the end the vision was gone, replaced with tawdry self-interest’. On its own terms the text builds a strong case against Smith, using extracts from letters sent between Smith and various other figures as evidence. In the most damning of these correspondences, Smith writes that a potential employee must ‘be unaware of any tie between J. G. L Poulson and me’. However, the provenance of these extracts is often unclear, and they always appear mixed in with passages of hearsay and speculation, which somewhat degrades the argument. Smith is quoted as having said ‘I support the building of council houses, but that does not mean I want to live in one’, but no source is provided; indeed, later in life Smith did live in a council house. There is a distinct sense that the main desire here is to slot Smith neatly into the overall narrative presented on Poulson, describing Smith as Poulson’s ‘chief lieutenant’, Poulson of course being the ‘arch corruptor’.

    The second of these texts, “Web of Corruption”, places Poulson and Smith in exactly the same relationship, even using the term ‘lieutenant’ in just the same context. However, the differences between the two texts are more pronounced than they may initially appear, because although they hold similar sentiments towards Smith, they are marketed towards different audiences. Web of Corruption is aimed at a far wider audience than Nothing to Declare, its large red font over a bright yellow cover immediately catching the eye. Its attacks on Smith are a vicious spectacle, describing him as a ‘con’ who was nothing more than a ‘moderately gifted amateur’. It paints a vivid image of a ‘socialist hero’ fallen from grace to become ‘unemployed but almost friendless, isolated but not ignored’. This text more than any other demonstrates the public appeal of Smith’s character and case: a man who had lived such a public life now finding his fame turned against him. Rather hypocritically, the text goes on to criticise Nothing to Declare for sensationalism, describing it as looking for ‘another Watergate conspiracy’. On the contrary, I believe Web of Corruption to be the far more sensationalist piece, looking to attack anyone associated with the Poulson scandal; no references are provided, and in all the text is more interested in human tragedy than hard evidence.

    In contrast to these highly critical texts, the later 1980s saw a selection of material which did not cast Smith in so devilish a light. Two documentaries produced around the same time both seek to re-evaluate his character somewhat. They do not claim that he was innocent of corruption, but they do begin to frame the affair as a case not so firmly closed. The first of these, a 1986 British Broadcasting Corporation (BBC) production entitled “T. Dan Smith”, describes him as a good man manipulated by private business into doing bad things. The key difference here is that it does not describe him as being “corrupted”; he himself has not turned onto a dark path, rather others have manipulated him. The film also gives more time to his achievements, casting him as the ‘pioneer and prophet’ of local government and giving him the title of ‘a modern crusader’. Together with the second of these documentaries, from Amber Films, it gives the impression that, after enough time had passed, the public were willing to reassess Smith’s character.

    The 1987 Amber Films production, “A Funny Thing Happened on the Way to Utopia”, takes a similar position to the BBC film, although where the BBC film takes a more neutral stance, A Funny Thing is quite sympathetic towards Smith. The style of the production, half documentary and half drama, points, in the same fashion as Web of Corruption, to the popular appeal of T. Dan Smith. His story is exciting, and his personal involvement as both a character and an interviewee highlights his own desire to be fictionalised. A Funny Thing takes great interest in the theory that Smith was made a scapegoat for higher powers, such as Reginald Maudling and the Privy Council, an idea that Smith is happy to engage with, proclaiming that he had been ‘fitted out’ because he ‘wasn’t one of them’. There is talk of a ‘power above parliament’, a dark underworld which remains uncovered. What comes across strongly in the interviews with Smith is his incredulity at how he has been treated by the media, angry at an injustice done against him. Overall the film does not acquit Smith of guilt, but it makes no compromise in eulogising his vision for the city as a ‘new Brasilia’, while concluding that the result of his work did not match his ambition.

    In 1993 a thesis entitled “The New Brasilia?” took a further step towards favouring Smith. The thesis does not tackle the allegations of corruption, but it does seek to assess whether Smith’s vision for the city was ‘harmful or beneficial’, and then to conclude whether he did ‘realise [his] goals’. It also goes into some detail on the state of Tyneside before Smith took over in 1965, noting that unemployment at that time was twice the national average, a statistic which Smith helped reverse. In its conclusion the thesis is surprisingly positive, describing Smith’s assessment of the issues Newcastle faced as ‘basically correct’ and his solution as a ‘relative success’. The New Brasilia? decides that, in the context in which it was carried out, Smith’s redevelopment was good for the city in its ability to stimulate the local economy and deal with traffic problems, and that his vision was sound.

    Ultimately, we are brought to the 2010 text “Voice of the North”, which inverts all assessments of Smith thus far. The text frames itself as a “myth buster”, carefully working through all the allegations made against Smith and rebutting them. It seeks to reinstate Smith as the proud figure of north-eastern regionalism he had been before the 1970s. The text covers many of Smith’s achievements not discussed in any of the previous sources, such as his opposition to modern developments on the picturesque Grey Street and to the bulldozing of the Holy Jesus Hospital, which was shockingly described by the Northern Architectural Association (NAA) as ‘not of the first importance’. It also goes to lengths to disassociate Smith from many of the concrete edifices so often linked to his name, pointing out how many of these were built after his tenure. Most strikingly of course, the text absolves Smith of his crimes entirely, favouring the view that he was ‘ground down’ into confessing his guilt by the press, the public, and other politicians. In regard to the specific case for which Smith was charged, I find the argument that he was indeed not guilty convincing, but overall I am not persuaded of his innocence in other matters, and indeed upon the book’s release there was resistance to this notion. Although I would regard this as the most historiographically sound of the sources I have reviewed, I believe its take on T. Dan Smith is slightly too reverential. It is understandable that this should happen as a reaction to the previously overly negative material, but a more balanced view would be appropriate.

    Across these eight pieces of historiography a clear progression is visible in appraisals of T. Dan Smith, a trend towards the positive both in terms of his vision and his charge of corruption. It is clear, however, that no conclusive narrative has yet been produced on the figure of T. Dan Smith, and questions over his innocence and his intentions still remain.

    Part 3 will look at the primary source material we have available about Smith and will ask whether it is substantial enough to come to some judgements about his case and character.

    Author/Publisher: Louis Lorenzo

    First Published: 21st of May 2018

    Last Modified: 1st of June 2018 (Grammar Corrections)

  • Local Hero or Corrupt Councillor? Assessing Our Current Understanding of T. Dan Smith Part 1: Introduction

    This article will introduce you to T. Dan Smith and the debates that surround his political and personal careers. It is deliberately short and omits much fine detail; however, this is in service of its central aim: to pique your interest in this topic and spark some enthusiasm for the mysteries presented herein.

    To summarise: Smith was born in 1915 to a working-class family in the city of Newcastle-upon-Tyne in the north of England. His parents were communists, and in his youth Smith joined the Revolutionary Communist Party himself. He was a conscientious objector during the Second World War and honed his skill for oratory during this time while making impassioned speeches criticising the British state. At the back of all his speeches he would always spot the same man, whom he would later discover was an MI5 operative tasked with watching him.

    After the war he moved away from communism towards the Labour Party and, rising through the ranks, was eventually elected leader of the city council in 1959. His time in office is controversial. As such, I will present here the two narratives you will generally encounter when looking at Smith: the positive and the negative.

    The Positive

    T. Dan Smith had a vision for Newcastle as a city reborn, the ‘Venice of the north’, and he hired planners and architects from around the world to build his new utopia. Newcastle became the first city in the UK to clear all its slums, and in their place tall towers were erected alongside modern blocks of offices and flats. Many of these were much-needed efforts at social housing, including the famous Byker Wall. The whole road network was redesigned in tandem with the pedestrian footways, the entire structure planned to separate pedestrians and cars completely through an intricate network of tunnels and “sky walkways”. Smith also oversaw huge investment in local arts institutions, as well as the creation of the city’s first university and a profitable shopping centre; few cities in the modern day see such investment. He is known as ‘the inventor of regionalism’ for his refusal to move to a better-paid job in Westminster in favour of standing up for his home region; little wonder he was also known as ‘the voice of the north’. He was wrongfully accused of corruption, taken advantage of by the truly corrupt forces above him in higher government, who didn’t like that he was a strong independent voice for the north that didn’t play nice with the political establishment.

    The Negative

    T. Dan Smith almost ruined the city of Newcastle. The elegant Georgian streets were demolished in favour of monstrous grey blocks of concrete. The skyline was now dominated by towers which made no effort to integrate with the existing landscape, and walkways which led to nowhere. The greatest insult of all was that this was done for his own personal gain. By steering government contracts to his own firms and those of his friends, Smith made money hand over fist through underhand deals and unethical accounting. His contact with the notoriously corrupt architect John Poulson only implicates him further. He lied to the people of Newcastle for his own gain, and his 1974 jail sentence of only six years was criminally short. He may have claimed he was a socialist, but when the money was in front of him he preferred to line his own pockets.

    Guilty or Not Guilty?

    So which narrative is correct? At this stage, it is an unsolved mystery; you know as well as I. At the time, the general opinion in the public and the media was that he was guilty. Having previously been a media darling, Smith found his popularity turned against him. Over time, however, that opinion has slowly begun to change, and recently a book was published proclaiming Smith ‘not guilty as charged’. The case is far from closed. Is it not fanciful to believe that Smith was framed by MI5, as he claims? But is it not also naïve to dismiss the claim, given we know this was a scandal which went deep into the heart of government?

    His guilt is not the only question, however. What about his legacy? Built corruptly or not, were Smith’s modernisations the right thing for the city? It’s hard to argue that the grey blocks as they stand today are particularly attractive, but it wasn’t Smith who dictated the architectural style of the time; these buildings were built everywhere. It’s also true that Smith never got to finish his vision for the city: would the whole network have worked better had it been completed?

    There are many more questions to answer and mysteries to unravel. For example: why did Smith plead guilty in court yet protest before and after that he was innocent? Answers are waiting to be found! Part 2 will be my assessment of all the material produced about Smith thus far, and my argument for why there’s still more to be done.

    Author/Publisher: Louis Lorenzo

    First Published: 21st of May 2018

    Last Modified: 22nd of May 2018 (Grammar Corrections)

    If you’re interested in learning more you can watch this fascinating documentary from Amber Films.

  • The Effect of The Civil Rights Movement on United States Foreign Policy

    On the 17th of May 1954, the United States Supreme Court concluded a landmark case that would bring to the fore a national movement lasting nearly 15 years. The case, known as ‘Brown v. Board of Education’, ruled for the desegregation of schools nationwide, calling on the 14th Amendment to overturn the doctrine of “separate but equal”. Yet we must ask how this specifically national movement came to have such great international consequence, defining the way the United States (US) conducted itself on the world stage. I argue that the civil rights movement affected US policy towards the newly independent nations of Africa and Asia, and towards the rest of the world, positively, in the context of Cold War propaganda. I also point to an internationalist outlook resulting from World War 2 which created an environment receptive to change. Overall, however, I conclude that it was the nationalist civil rights movement, and the subsequent backlash against it, which was the greater driver of US foreign policy in this period.

    The internationalist approach to civil rights was influential within the movement itself, within public perception, and on domestic policy, but it was less influential on US foreign policy than the nationalist approach. Martin Luther King placed the civil rights movement squarely in a global context. In texts such as ‘The Ethical Demands for Integration’ it becomes clear that King’s philosophy centred on the concept of a peaceful international understanding that all humans have intrinsic worth, and that changing policy will have negligible effect if you cannot change ‘hearts and minds’. King linked this internationalism with American patriotism when he alluded to Abraham Lincoln’s Gettysburg Address in his ‘I have a dream’ speech of 1963.

    Indeed, King was ultimately successful in changing those hearts and minds, as his peaceful protest endeared the freedom struggle to the American public. However, leaders such as King, Robert Moses, and Marcus Garvey were not influential in changing US foreign policy, though they were instrumental in assisting it. In the context of the Cold War, where the US and the Soviet Union (USSR) were vying for world influence, both countries wanted to be seen as internationalist. The internationalists in the civil rights movement were therefore useful in promoting the US image, especially in newly independent nations such as the Democratic Republic of the Congo (DR Congo) and in existing nations such as Ethiopia, which were receptive to the idea of a pan-African movement. However, the promotion of figures such as Louis Armstrong and Duke Ellington as ‘Jazz Ambassadors’ by the United States Information Agency (USIA) exemplifies both the country’s desire to be perceived as internationalist and the superficial way it conducted foreign policy to achieve that goal. The assassination of Patrice Lumumba in 1961 has since revealed to historians the validity of claims that the US was not practising what it preached in terms of global freedoms. The internationalist civil rights activists were helping the US government maintain a veneer of an internationalist foreign policy without its having to implement much change.

    Conversely, the nationalist civil rights activists did cause meaningful change in US foreign policy, because the nationalists were a more militant group than the internationalists. Whereas King sought inspiration from figures such as Gandhi and the Indian independence movement, figures like Malcolm X drew guidance from more violent protest. Malcolm X, as laid out in his 1963 speech ‘Message to the Grassroots’, believed that if change was needed, they would have to take it, citing the French (1789), Russian (1917), and American (1776) revolutions as examples. Despite these international influences, Malcolm X and others like Elijah Muhammad and the Black Panther Party (BPP) still led a nationalist approach, not seeking cooperation with those other countries but attempting to mimic their ethnocentricity. This was intensified by the movement being conflated with communism: because Soviet propaganda focused on America’s incomplete civil rights, some Americans responded by calling civil rights anti-American. Thus, facing accusations of being unpatriotic, and with their militancy associating them with revolution, the militant nationalists had to ensure they were seen to be patriots. Physical acts of rebellion, such as the 1964 Harlem riots or the 1965 Watts riots, were so influential on US foreign policy because they produced images which would be spun around the world and affect global opinion; all eyes were watching a country in turmoil. Thomas Jackson has shown in particular that the Kennedy administration was very concerned with this image, trying to get protestors “into the courts and out of the streets”. Indeed, despite the public message, the administration privately thought the whole affair “bad for the country”.

    Michael Klarman has discussed the notion that it was not even the nationalist civil rights movement that had the greatest impact on US foreign policy, but rather the nationalist backlash against it. This backlash was tied into several other issues, including abortion, the death penalty, and same-sex marriage, giving the impression of more than mere rebellion; it was beginning to look like a repeat of the Civil War. It was the “everyday racism of any white person”, as Thomas Borstelmann explains, which was the most problematic. Borstelmann explores the legacy of the Jim Crow laws that hung over American life, called America’s “Achilles’ heel before the world” by Senator Henry Lodge. He also highlights how America’s opposition to European colonisation, justified partially on racial grounds, forced its hand in adopting a more interventionist foreign policy. Feeling it must enforce that vision of post-war anti-colonialism around the world, America intervened in areas of proxy war, such as Vietnam, Iran, South Africa, and Guatemala. Additionally, America’s wartime words of equality rendered any endorsement of discrimination, at home or abroad, contradictory, as Mary L. Dudziak explains in her preeminent work in this field. She also discusses, along with Brenda Gayle Plummer, the key role of the media in foreign policy. Dudziak particularly notes the role of the foreign media: civil rights activists may have “manipulated” such sources, knowing the US government would be reading, to be particularly critical of discriminatory practice.

    It is interesting to observe that in many ways the nationalist and internationalist branches of the civil rights movement had the opposite effects on US foreign policy to those they intended. The internationalists were the ones who achieved domestic change by winning over the American public, whereas the nationalists, by tarnishing the American image, incited a foreign policy that promoted an internationalist agenda.

    Author/Publisher: Louis Lorenzo

    First Published: 9th of September 2017

    Last Modified: 9th of September 2017

  • The Generalising Crisis of the 17th Century

    ‘The mid-seventeenth century experienced a “general crisis” in which a wave of economic, social and political upheavals swept over many parts of the northern hemisphere’ (Parker, 2001, p.20)

    To accurately assess this statement, it is necessary first to define the term ‘general crisis’. For the purposes of this article, the definition used herein has been devised from a plain amalgam of ‘crisis’ and ‘general’: ‘crisis’ is defined as ‘a decisive stage in the progress of anything… applied esp. to times of difficulty’ and ‘general’ as ‘approximately universal within implied limits’. Therefore, we define ‘general crisis’ as:

    “A period of decisive change that affects a significant proportion of the global population, commonly caused by times of difficulty.”

    By this definition, the ‘general crisis’ theory is not controversial; it is commonplace. Difficulty resulting in change is what you may simply term “history”, and by extension historiography itself, the study of change. Certainly the 17th century is no exception to this rule, and Parker’s own contribution illustrates this convincingly. His figures on state breakdowns, popular revolts, wars, and mortality rates are both global and numerous, and his use of historiographical metrics gives a clear framework for what is meant by ‘general crisis’. Indeed, this clarity is what makes Parker’s argument superior to the efforts of many who preceded him, including two highly influential crisis historians: Eric Hobsbawm and Hugh Trevor-Roper. Hobsbawm’s crisis emphasises trade depression and methods of production, whereas Trevor-Roper’s describes a societal and political crisis based on unwieldy bureaucracy. These are fine premises but lack precision, never fully circumscribing ‘general crisis’ in the manner Parker manages. Their lack of empirical clarity has been widely criticised, particularly Trevor-Roper’s, which was heavily censured in a responding review in which J. H. Hexter asserts that ‘Trevor-Roper paints his picture… with such bold strokes and so broad a brush that he occasionally obscures rather than clarifies’.

    The Problem with The Crisis Theory

    However, whether or not we accept the crisis theory, we cannot avoid the debate’s wider issue, which leads me to conclude that the term must be abandoned. Why do we find so many articles on the crisis that begin with a definition rather than a proposition? It is because the ‘crisis’ is no one instance; it is a theoretical grouping of events, and the problem is that these events are not known quantities. What counts as the ‘general crisis’? Everybody will inevitably reach unique conclusions on this matter because there are no boundaries to choosing what constitutes ‘general crisis’. As we have seen, historians may include whatever they please.

    This results in a debate which keeps returning to definitional matters: an intellectual vacuum in which agreement is unattainable as we continue to talk at cross purposes. The ‘general crisis’ debate is therefore detracting from other possible discussion of the 17th century, generalising all debate on the period under the uncertain theme of ‘crisis’. To learn further from the 17th century, then, we must change the game, because currently we’re all playing by different rules.

    The 17th century is uniquely prone to this ‘generalising crisis’ because it lacks identity. The century has ‘Renaissance and Reformation on the one side, Enlightenment and Revolution on the other’, appearing inconspicuous in comparison to its neighbours. Naturally, we wish to seek what defines the 17th century as well. What the ‘general crisis’ theory did, and why it became so popular, was give the 17th century an identity. Unfortunately, this identity was not one that grew from the history, but one imposed upon it. Eric Hobsbawm devised the crisis to suit a Marxist historiography, the purpose always being to “show” a transition from a feudal to a capitalist economy. Such histories encourage cherry-picking evidence to serve a confirmation bias. This may have been inconsequential had it remained within Marxist historiography, but its subsequent envelopment of 17th-century writing has led to an eternally skewed approach to the century; we lack the balance of differing approaches. The reality has become that we are writing within a framework that was purposefully designed to limit our scope.

    That said, this does not mean we may simply discount all generalist historiography. Contrariwise, most of this work is valid in identifying important 17th century occurrences and their origins. The problem lies only in how these events are then interpreted as we attempt to integrate them into a flawed theorem.

    New Approaches to the 17th Century

    Understanding this, we can examine further these ‘economic, social and political upheavals’ and explore several ways in which we can better understand the 17th century without the obscuration of this ‘generalising crisis’.

    Looking through prior crisis articles, we find the important economic events of the 17th century to be: the trade depression of the 1620s to 1650s, the “tulipomania” of the 1630s, the economic independence of colonies, the multiplication of costly offices, and the Thirty Years’ War, with estimates of national expenditure upon it running as high as 50%. A generalist historian must value these factors on how they conform to their crisis thesis, but we can assess them purely on individual merit.

    To begin, we note that these are exclusively European occurrences, not global. Further, these problems stem categorically from imperialist nations: the ‘tulipomania’ was contained to a Dutch economic bubble; the expanding office bureaucracy was concentrated within Britain; the increased independence of European colonies was a factor in a European trade depression; and the Thirty Years’ War was motivated by imperialism as much as by devoutness. Abandoning a generalist perspective reveals not a ‘Crisis of the European Economy’ generically, but an issue within European empire specifically. We see the economic costs of maintaining empire, particularly how increased economic independence of colonies leads to a long-term desire for political independence, as with North America. We also see the short-term impact of economic depression, all of which can be exacerbated by large-scale war, as with the Thirty Years’ War.

    Hobsbawm’s article comes close to this conclusion, describing how ‘large and expanding markets’ brought economic downturn, but his focus on ‘crisis’ leads him toward the irrational conclusion of ‘transition from a feudal to a capitalist economy’. He misses the angle on empire, its maintenance, and the effects of war upon it. It becomes evident that allowing economic independence for colonies, especially during wartime, can hurt an economy far more than may first appear. For Britain particularly, this helps us understand the economic roots of the decline of empire in the 20th century. Britain may well have handled its colonies with greater tact during the First and Second World Wars had it been aware of the consequences that might follow from neglect during wartime. As it was, we see events such as the Bengal famine of 1943 leading to an increased desire for political independence in the colonies, one that felt additionally earned due to their efforts in the world wars.

    Politically, the 17th century was defined by a weakening of the nobility as a result of a consolidation of power toward the crown, as well as by the expanding bureaucracy already mentioned in economic terms. Despite seeing this across the ‘northern hemisphere’ in both east and west, and contrary to a generalist view, we only find political crisis on this account in the west. Quite the opposite: the Tokugawa period in Japan (1600-1868) and the Ming dynasty in China (1368-1644) have been judged unusually stable regimes, the Ming dynasty especially, described as ‘one of the greatest eras of orderly government and social stability in human history’. What can we learn from this? We learn that political events cannot be a central factor in causing crisis. The multitudinous revolts, breakdowns, and wars cannot have been politically driven, at least in the majority, otherwise we would have seen greater political instability in the east than we do. However, we do still see a degree of volatility in the east, an example being the Shimabara rebellion of 1637-8, so we must search for other factors that may have caused this.

    Social factors haven’t been explored as thoroughly as others when it comes to the general crisis. This is most likely due to the locality of social trends of the time. Unlike the global reach of politics and economy in the 17th century, social movements were far more localised, with only a handful of countries having postal services. Thus, social factors don’t fit well into the ‘general crisis’, because they weren’t very general. However, the dominating social force in 17th-century Europe is still clear to see: religion. The Thirty Years’ War is a prime example of this, but we can also point to the rise of religious polemicists like Jonathan Swift, the Glorious Revolution, the Eighty Years’ War, and the revocation of the Edict of Nantes. Here we do spot a consistency with the east: despite rebellions being rare, the Shimabara rebellion of 1637-8 was directly a result of religious tension. Moreover, this was a European-exported tension, as the conflict was between the Tokugawa shogunate and the Roman Catholics. Also in the east, we see the rise of the Sabbatean movement. One conclusion here may be that Europe managed to export its religious troubles to the east, and this is why we see some social parallel between the two continents.

    Centrally, this article explores the benefits of approaching the 17th century without thinking of ‘general crisis’. There is a far richer history to be found if you are prepared to cast the century in different lights, the surface of which we have not even scratched in this article. You may disagree with some, or many, of my conclusions, and that’s great. There is no single lens through which you must view the 17th century; approach it in a manner which is logical to your interests.

    Author / Publisher: Louis Lorenzo

    First Published: 5th of April 2017

    Last Modified: 5th of April 2017

  • The 17th Century Economic and Political Crises Compared

    The problem with the debate around ‘the general crisis of the 17th century’ arose as soon as the phrase was put to paper; it’s the same problem that plagues debates over ‘Brexit’: the central term is nebulous. Historians must decide for themselves what ‘general’ refers to, which causes great conflict like that we see between Hobsbawm and Trevor-Roper. It’s inevitable you will produce different paintings when you aren’t working from the same palette. We will hardly, therefore, draw fair conclusions by simply comparing the two articles; we must look for more useful approaches to this debate. Historians such as Roger B. Merriman have answered this by concluding that the crisis did not exist: if there is no central understanding of its existence, how can it exist? However, although logical, this side-steps the problem rather than tackling it; invalidating all previous discussion is counter-productive. To contribute constructively, we should accept the existence of a general crisis but create a definitive core idea of what the crisis consisted of. This will allow fair debate and comparative historical writing on the topic. How will we achieve this? The answer is simple: by studying articles on the crisis and finding the key events that bind them together. Hobsbawm and Trevor-Roper are excellent initial candidates, as they approach the debate from such antithetical positions that finding commonalities in their arguments is certain to reveal the core of this crisis.

    Hobsbawm’s Marxist interpretation is that the crisis was centrally economic, whereas Trevor-Roper regards the crisis as primarily political; his viewpoint is concertedly ‘anti-Marxist’. These are such disparate standings that their articles seem bound to draw diametrically opposite conclusions. Indeed, Trevor-Roper’s article is even antagonistic towards the Marxist interpretation. However, the articles are not so divergent; in fact, they complement each other on the importance of several key areas: the expansion of empire in the 1500s, the multiplication of crown ‘offices’, the 1620 ‘decay of trade’, and the Thirty Years’ War (1618-48). These are our first key commonalities that we can use to define this crisis. So why, if the two articles agree on so much, would Trevor-Roper preface his article with a critique of Hobsbawm’s? Directly put, Trevor-Roper vehemently opposed how Hobsbawm utilised the crisis to justify the Marxist idea of an inevitable progression from feudalism to capitalism. This anti-Marxism even went so far as to cause private hostility between Trevor-Roper and Hobsbawm. We can therefore safely write off Trevor-Roper’s initial attack on Hobsbawm as too personally motivated to be taken seriously as a critique, and focus on what is important: the arguments that the two historians forward regarding ‘the general crisis of the 17th century’.

    So how do they agree as wholly as is being implied if they attribute the crisis to different causes? Contrary to first impressions, these two lines of argument are not incompatible but work in tandem; there were both political and economic (alongside social and cultural) attributes to the crisis. Indeed, Hobsbawm himself recognises this commonality of purpose in a later article, remarking that their views “are complementary rather than competitive”. As is always the case with history, there is a web of causation and no single factor stands apart from all others. The two historians are, of course, aware of this, but in arguing the significance of their own chosen factor they have overlooked the importance of others.

    Both argue the importance of the expansion of empire during the 1500s. Hobsbawm describes “large and expanding markets… of the later 15th and 16th centuries” that had reached “the limits… of feudal or agrarian society”, and now, “when (they) encountered them, (they) entered a period of crisis”. These “limits” are the limits of a feudal, manorial society that had little need for trade. In new, large empires, and with movements like the agricultural revolution facilitating trade, feudalism was proving ineffective and causing crisis as countries struggled to maintain the trade levels required to sustain themselves. Feudalism was not an aggressive enough system, unlike the more competitive systems that would replace it. Trevor-Roper argues the same point: “The expansion of Europe (created) greater markets” and these “vast new empires (were) vaster than they (could) contain for long without internal change”. This is the same economic argument that Hobsbawm contends: that the economies of these countries had grown too large to be supported by a feudal system of commerce.

    Trevor-Roper furthers this economic point by adding a parallel political factor: “The political structures of Europe are not changed in the sixteenth century: they are stretched to grasp and hold new empires”. By ‘stretching’ he means a “multiplication of ever more costly offices (that) outran the needs of state”. The crowns of Europe were selling bureaucratic ‘offices’ in abundance and letting the country pick up most of the cost; in Britain, 75% fell on the country. “This was an indirect, if also a cumbrous and exasperating way of taxing the country”, Trevor-Roper argues. “So ‘the Renaissance State’ consisted, at the bottom, of an ever-expanding bureaucracy which… had by the end of the sixteenth century become a parasitic bureaucracy.” These economic and political arguments are closely related: as the economy hit its “limits” at the end of the 16th century, it pushed the crown to expand a “parasitic” bureaucracy which made money in the short term but damaged the economy further in the long term as the superfluous expenses continued to increase.

    Another significant factor the two historians point to is the universal depression of 1620, what Trevor-Roper refers to as the “decay of trade”. Hobsbawm describes it as “a general balance of rising and declining trade (that) would produce export figures which did not rise significantly between 1620 and 1660”. Caused by the debasement of currency in the early 1600s, this depression brought recession to the new societies of trade and empire. It marked the end of great economic expansion and brought the frivolous expenditure of the 16th century into sharp clarity. The rise of puritanism during this time clearly shows that people were sick of the “gilded merry-go-round”; this is when the weight of the crown offices and the limits of feudalism began to show themselves, having previously been masked by a boom economy.

    Both also concur over the significance of the Thirty Years’ War, Trevor-Roper arguing that the war “undoubtedly prepared the groundwork for revolution” and Hobsbawm that it “intensified the crisis”. The war made the illnesses of Europe’s economy acute; estimates of national expenditure on the war, from all parties, run as high as 50%. It is also important in how it diminished the influence of the pope, allowing puritanism to spread at an unusually fast rate, and animosity toward traditional structures of power with it. It is noteworthy that neither historian places as much emphasis on the war as you might expect. They present it as a more minor force that furthered the ‘greater’ change brought by political and economic factors. This can certainly be attributed to the fact that it falls strictly under neither economic nor political history, but military history. As such, the effects of the war in causing the general crisis are somewhat overlooked. However, the fact that it appears in both articles, yet in neither historian’s chosen field, proves its significance.

    In the interests of brevity, we will not discuss factors that Hobsbawm and Trevor-Roper have not spoken on: the expansion of the middle class, intellectual revolution, famine, disease, and many others. What is clear is that more research needs to be done into these factors that straddle the multitudinous accounts of the crisis. But here we have a start: it is not the case that Hobsbawm and Trevor-Roper disagree, whatever their own thoughts on the matter. The political and economic crises, specifically linked to the expansion of empire and the multiplication of ‘offices’, are two sides of the same coin and two of the ‘base factors’ that can be used to build a cohesive understanding of this crisis. They work alongside factors such as the decay of trade and the Thirty Years’ War to create separate crises in different countries which produce a sum total of general crisis in the 17th century.

    Author / Publisher: Louis Lorenzo

    First Published: 5th of April 2017

    Last Modified: 15th of May 2017 (Grammar Corrections)