Neoliberalism and the commodification of healthcare in England

by Rachil Emmanouel

NHS staff protesting

The NHS, which this year celebrated its 70th anniversary, was described by its founding father, Aneurin Bevan, as “the most civilised step any country has taken”. It emerged as one of the pillars of the welfare state in the aftermath of the Second World War, in an era when social democratic sentiments and demands for a more egalitarian distribution of resources were captivating the nation. Today, the NHS remains an object of veneration and arguably the most cherished institution in the country. It offers universal, comprehensive and free healthcare from cradle to grave and has liberated citizens from the financial burdens of illness that were characteristic of the pre-war era. However, unbeknown to large swathes of the general public, the NHS has, over the past 30 years, been subjected to a series of reforms that threaten its founding principles. These reforms have been driven by neoliberal ideology and health industry lobbying, with the aim of transforming healthcare from a public good into a profitable commodity. The incremental changes introduced by successive governments are often packaged and sold to the public under more palatable terms, with which we have all become all too familiar: efficiency, patient choice and modernisation. On closer inspection, however, the reality that emerges is one of unprecedented cuts, job losses and the diversion of the NHS budget to corporate actors that are unaccountable and place profit above people’s wellbeing.

The basis of neoliberalism

In response to the global economic stagnation of the 1970s, the world witnessed an emphatic turn towards neoliberal principles, spearheaded by the Thatcher and Reagan regimes in Britain and the United States (Harvey, 2007). In essence, the central features of neoliberal ideology promote the shrinking of the state, the privatisation of public services and industries, and the deregulation of banks. In Britain this has translated into the gradual erosion of public institutions previously regarded as immune from private profiteering, such as education, public housing and indeed healthcare. At its crux, the neoliberal position favours market-led over state-led approaches. The underpinning assumption is that state-run institutions are inadequate, inefficient, bureaucratic and of a low standard, whereas private sector involvement is regarded as efficient and innovative. Under neoliberal thought, business logistics and corporate management are to be adopted by the supposedly rudimentary public sector in order to generate efficiency and results.

Marketising the NHS

Neoliberal rhetoric has seeped into policy-making pertaining to the NHS. Successive governments, both Labour and Conservative, have adopted policies favouring the restructuring of the organisation into a market model and the shrinking of the state through austerity measures (Leys, 2017). The process of marketisation has been insidious and began with the creation of the internal market in the early 1990s under John Major. This was a system of NHS hospital trusts (providers of services) and Primary Care Trusts (purchasers of services) in which hospitals competed to secure business. To fit a business model, hospitals’ income was tied to performance through the introduction of ‘payment by results’, whereby every completed treatment was assigned a fixed price according to its cost and risk. Despite assertions that the marketisation project would reduce bureaucracy and costs, the reforms have in fact achieved the opposite. A 2005 study found that £10 billion, roughly 10% of the NHS budget, is spent annually on running the internal market (Bloor et al., 2005). These costs accrue from the billing of treatments, the formulation of contracts, litigation, and the salaries of an ever-increasing number of senior managers, lawyers, HR and IT staff recruited to run the market’s infrastructure.

Under New Labour, the NHS Plan 2000 and the NHS Improvement Plan 2004 served to open up this internal market to commercial interests, as the private sector was allowed to bid for NHS contracts and compete with NHS hospital trusts. Again, the reforms were premised on the notion that these public-private partnerships would promote innovation and efficiency and ease pressure on cash-strapped NHS trusts. The reality, however, has turned out rather differently. Schemes such as the Independent Sector Treatment Centres (ISTCs) have seen private companies, equipped with financial and legal might, outcompeting NHS trusts for high-volume, low-risk, lucrative NHS contracts such as cataract removals and hip replacements. Meanwhile, NHS trusts are left with the double burden of losing out on high-income procedures whilst being left to pay for less profitable ones, such as treatments in emergency care (El-Gingihy, 2018). Furthermore, findings by the British Medical Association reveal that, in an attempt to attract private investment, ISTC contracts cost on average 12% more than NHS tariffs, and private providers are often paid in advance for a pre-determined number of cases, regardless of whether or not treatments are actually completed (Ruane, 2016). One astonishing example is that of the private contractor Netcare, which performed a meagre 40% of contracted procedures whilst receiving £35 million for patients it never treated (Pollock, 2014). Such practices border on the fraudulent, yet they hardly come as a surprise given that private companies are primarily accountable to their shareholders and have a duty to make profits.

The notion that a healthcare system can be run successfully under a business model is fundamentally flawed, not least because the private sector’s dogma of cost-cutting and efficiency is irreconcilable with good patient care. Unlike in other industries, efficiency cuts do not translate into improved productivity in healthcare. Its reliance on highly skilled labour and the continual development of new treatments, coupled with rising demand for healthcare services from an ageing population, means that it cannot fit a business model that strives for increased output with reduced investment. Adherence to the ‘efficiency’ principle within the NHS has underpinned decisions to reduce staffing levels across trusts, restrict access to health services, impose pay cuts and compromise the quality of care.

Perhaps one of the worst examples of this comes from the catastrophic failure at Hinchingbrooke Hospital in Cambridgeshire, the first NHS hospital to be transferred to a private management firm (Cooper, 2015). Circle Health won the contract having promised to use its business acumen to improve the hospital’s performance, but the reality stood in stark contrast to these pledges. Between 2012 and 2015, the hospital’s deficit doubled. In an effort to curb costs, Circle Health’s management hollowed out the quality of care to the point where the Care Quality Commission (CQC) put the hospital under ‘special measures’ for its inadequate safety performance. After just three years, the firm abandoned the contract, leaving the hospital £9 million in deficit, excluding additional litigation costs – all of which were to be covered by the NHS. The reality of private sector involvement in the NHS has thus been the privatisation of profits and the socialisation of harm, whereby private firms are not held accountable for their failures and the British taxpayer is ultimately made to rectify the consequences.

Privatising politics

Despite the failures of the marketisation project in the NHS, the British government today continues to award large contracts to the private healthcare industry whilst simultaneously imposing austerity measures on the NHS budget. Meanwhile, recent data from the British Social Attitudes survey reveal that 61% of people are willing to pay more tax to fund the health service (The King’s Fund, 2018). What is clear is that the neoliberal reforms of the NHS over the past several decades have been passed without popular support or a democratic mandate. Colin Leys, co-author of The Plot Against the NHS, argues that changes have been made covertly, with their true intentions deliberately concealed (Leys and Player, 2011). How this gradual dismantling of the NHS, and indeed of other public assets, has been achieved points to the wider issue of how the democratic process has been hijacked in the neoliberal age. Implicated in this process is the media and its role in propagating, and failing to critically challenge, government spin. More crucially, British politics itself has been privatised and bent towards the interests of private corporations. There exists between the government and the private health industry a ‘revolving door’, through which politicians can leave office and become well-paid lobbyists for large corporations, and vice versa (Cave & Rowell, 2014). For example, Simon Stevens, the current Chief Executive of NHS England and architect of many recent NHS reforms, is a former president of United Health Global – one of the biggest American private insurers, which has been implicated in countless scandals and lawsuits. Political decisions are mired in conflicts of interest, as politicians’ loyalties lie not with the general public but with their financiers. The stranglehold private corporations possess over the political process has also been demonstrated in cases where the government has acted against private sector interests. In a recent example, Virgin successfully sued the NHS and received £2 million in compensation for not being awarded a contract to deliver care in Surrey (NHE, 2018). In this sense, it can be seen how the private sector adopts a carrot-and-stick approach to securing its interests.

Neoliberalism: a means to restore class power?

David Harvey (2007) asserts that neoliberalism has been a “project to restore class dominance to sectors that saw their fortunes threatened by the ascent of social democratic endeavours in the aftermath of the Second World War”. Judged solely against the example of Britain’s National Health Service, which has been subjected to a series of neoliberal reforms leading to the increasing commodification of healthcare, Harvey’s position appears rather plausible. The British public’s unwavering desire to maintain the NHS as a universal service, free at the point of use, is perhaps the most powerful bastion against an extensive, US-style private healthcare system. As Aneurin Bevan himself once wrote, the NHS will last as long as there are folk to fight for it.

Brain drain or brain gain? A case for the Nigerian medical diaspora

By Emmanuella Togun


Imagine you are a health worker in a resource-limited country where you feel overworked and underpaid. You are aware that your colleagues abroad work in more conducive environments with better technologies, opportunities, incomes and outcomes. If the chance came to switch places, would you take it? This is an offer that many contemplate in Nigeria, where there has been a massive exodus of health workers in pursuit of ‘greener pastures’. Brain drain, as this phenomenon has come to be known, is the process by which a country loses its most educated and talented workers to more favourable geographic, economic or professional environments in other countries (Adeloye et al., 2017). This post examines the implications of the brain drain for Nigeria and argues that the diaspora should instead be considered a ‘brain gain’ for their source countries, with the experience they acquire practising abroad used to drive development at home.

Nigeria is the most populous African nation, with 197.5 million people, and has grown by more than 40 million people in the last decade. With an annual population growth rate of 2.5%, it is predicted to become the world’s third most populous nation by 2050 (Hagopian et al., 2005; Adeloye et al., 2017; World Bank, 2017). However, Nigeria has only 72,000 physicians, and half of its surgical workforce is practising abroad, making it one of 57 countries in the world with a severe health worker crisis. Healthcare spending stands at 4.3% of the national budget, a far cry from the recommended 15% (WHO, 2015; Adeloye et al., 2017). Recently the Nigerian government has spoken fervently about the impact of brain drain on healthcare development and has, on numerous occasions, appealed for the return of overseas talent (Adetayo, 2017; Aminu, 2018; Adebowale, 2018; Vanguard, 2018). In light of the nation’s rapidly growing population and health needs, the issue of brain drain has been identified as a key barrier to the progress of the health sector.

The brain drain is not a process that started overnight. In fact, the movement of physicians from developing to developed countries has been on the rise for over 50 years (Astor et al., 2005), and development planners have been drawing attention to the massive shift of labour from the ‘periphery’ to the ‘core’ since the late 1960s (Odunsi, 1996). The USA, for example, recruits numerous health workers from developing countries to bridge gaps in its healthcare workforce, especially in rural areas. Notably, 40% of physicians in the USA who emigrated from Sub-Saharan Africa were trained in Nigeria (Hagander et al., 2013; Astor et al., 2005).

Logan (1987) identified several common characteristics of the major exporters of skilled labour: they were English-speaking, with large populations, colonial histories and established institutions of higher education. Nigeria, a nation possessing all of these features, can trace its history of brain drain back to the early post-colonial era, after gaining independence in 1960. When the British colonised Nigeria, they established schools designed to emulate the standards of staffing, technology and research funding enjoyed in Britain. This made world-class training and practice conditions accessible to locals (Arnold, 2011).

However, national independence led to a disruption of access to high-quality training opportunities. Healthcare standards and spending fell due to shifts in the priorities of new governments, corruption and externally imposed structural adjustment programmes, all of which hindered the efforts of newly independent states to develop their own national infrastructure (Arnold, 2011). Skilled practitioners trained to British standards could no longer earn or practise to the quality they were used to and opted to move to countries like the UK (Arnold, 2011). The trend continues, with an estimated 2,500 doctors migrating each year (Adeloye et al., 2017).

The most common triggers for migrants are the desire for a higher income, improved working and living environments and research opportunities (Astor et al., 2005; Hagander et al., 2013). A random survey of Nigerian professionals in the USA showed that housing difficulties, unemployment and underemployment were also major reasons for migration (Odunsi, 1996). Unstable socio-political and economic conditions, such as the economy’s inability to absorb human resources, a hostile economic climate, political unrest and professional pessimism, have also been identified as important push factors (Adeloye et al., 2017; Odunsi, 1996; Stilwell et al., 2004). These socioeconomic and political factors were also the greatest determinants of whether a migrant stayed permanently in the host country (Hagander et al., 2013).

Furthermore, there is evidence that Nigerian doctors practising locally are not adequately employed or managed: job dissatisfaction, unpaid salaries and poor working conditions, for example, have instigated numerous labour strikes by members of the Nigerian Medical Association (Akinyemi and Atilola, 2012). A study by Thomas (2008) further reveals that, although returning migrants generally have greater chances of employment than non-migrants, in a home country that is economically weak and has high unemployment, returning workers are less likely to be gainfully employed; many therefore re-emigrate. Structural theories of return migration emphasise the importance of socioeconomic and political factors in determining whether return migrants can be properly re-absorbed (Thomas, 2008).

Nigerian doctors protesting unpaid salaries

These findings necessitate examining the implications of brain drain. In counting the losses caused by brain drain, it is important to consider the massive economic cost a country incurs by subsidising medical education in the hope that physicians will serve and pay taxes in the country upon graduation; these costs cannot be recovered when the physicians migrate (Hagopian et al., 2005). The big question is where the line should be drawn between an individual physician’s right to choose where to practise on the one hand, and, on the other, the population’s right to the best quality healthcare and, possibly, the government’s right to a return on its investment.

In the case of government cost recovery, it could be argued that the government still receives a return on its investment indirectly through remittances (Adeloye et al., 2017). In 2017, about $22 billion was sent home by Nigerians abroad, amounting to about a quarter of the country’s oil export earnings (World Bank, 2017). This is a significant contribution to the nation’s economy. However, it could be argued that the greatest loss lies not in the quantity of workers leaving, but in the quality of minds lost. It is usually the most ambitious and talented health workers who leave, denying the populace the best quality medical services (Benedict and Okpere, 2012; Stilwell et al., 2004).

In attempts by countries to retain or bring back their skilled workforce, policy options that have worked include income adjustment and the improvement of working conditions, as demonstrated in Thailand and Ireland (Astor et al., 2005). Others have involved incentives such as the provision of housing, training opportunities, study leave, mentoring and feedback (Stilwell et al., 2004). Since the main incentive to emigrate is the prospect of a substantially higher income, these options may not be the most sustainable courses of action for a country like Nigeria, which cannot afford to raise and maintain physician income at levels comparable to what physicians may receive abroad. Furthermore, if unemployment and underemployment are still issues in Nigeria, the question arises as to whether it is in the government’s best interest to appeal to exported talent when there is a shortage of suitable jobs and remuneration for them on return. The next best solution is to explore how Nigeria can gain from its pool of international talent in more ways than remittances alone, in what we can call brain gain.

Brain gain involves the remote mobilisation of skilled workers abroad and their involvement in programmes at home (Meyer et al., 1997). Citizens abroad are not required to return home, but instead contribute to home development initiatives remotely. This may involve activities such as the transfer of knowledge through training and mentoring, research and innovation, collaboration on short-term projects, patient- or system-level consultations and even donations, and it has been shown to work in a variety of settings. In this way, practitioners’ basic right to choose where to practise is not compromised, while they are still able to contribute to national development in their home country, regardless of where they choose to reside.

India, for instance, added an ‘Indians abroad’ database to its National Register for Scientific and Technical Personnel. The aim was to gather information on diaspora Indian professionals and, with the Council for Scientific and Industrial Research, offer short-term appointments and opportunities to serve as visiting scientists and research associates. The collaboration and transfer of knowledge between professionals at home and abroad encouraged development (Meyer et al., 1997). Indian medical diaspora organisations such as the American Association of Physicians of Indian Origin have also been involved in knowledge and technology transfer to India, forming transnational links between Indian medical institutions and those in high-income countries. They also hold educational and conference visits, forge scientific and professional partnerships and give donations, all of which have greatly improved the Indian health system (Sriram et al., 2018). Countries like Colombia have successfully adopted similar strategies to benefit from their diaspora talent (Meyer et al., 1997).

A study by Nwadiuko et al. (2016) of USA-based Nigerian physicians showed that the desire to re-emigrate was almost directly proportional to a person’s current involvement with the Nigerian health system, as measured by donations and the number of medical service trips home. As the Indian experience – where the government is involved in engaging the diaspora – demonstrates, government involvement on at least a regulatory basis is essential for this to work effectively and in a coordinated manner in Nigeria.

The Nigerian health system suffers from years of underinvestment, which has led to the neglect of healthcare infrastructure, research and wages for healthcare workers (Adeloye et al., 2017). Health workers in the diaspora should not shoulder the blame for this. Nigeria has a robust medical diaspora willing to contribute to the development of its health system, and it must engage them in its development efforts. Exploring this may be the beginning of a solution to the developmental challenges faced in the Nigerian health sector, even if longer-lasting solutions will require greater investment in health systems.

Colonial Caste, Census and the Bifurcation of State

by Aditya Gangal

The British Empire had a profound impact on the socio-political environment throughout its various colonies. The colonial legacy extends from politics to economics, education, infrastructure and even religion. The Indian subcontinent represents one of the larger stages on which this impact played out, with a complex and nuanced interplay between the caste system (representing over 2,000 years of Indian society and culture) and the institutions of formal rule by the British. This post analyses the applicability of Mahmood Mamdani’s theory of the ‘bifurcation of state’ (which Mamdani developed in relation to colonial legacies in Africa) to caste in colonial India, as well as the way in which censuses were used as a systematic tool of classification and oppression under British rule. The census, a process of documenting a population that ‘within the colonial context […] [was] regarded as being vital to the maintenance of control’ (Christopher, 2008), was widely used in India and other colonies.

In his theory, Mamdani focused on European rule in Africa, describing it as two-tiered. Direct rule referred to such things as ‘appropriation of land, the destruction of a communal autonomy, and the defeat and dispersal of tribal populations’, essentially replacing native orders with European ones. By and large, this approach was adopted in predominantly urban areas. In rural areas, however, the principle of indirect rule prevailed. Per Mamdani, ‘the tribal leadership was either selectively reconstituted as the hierarchy of the local state or freshly imposed where none had existed, as in “stateless societies” […] for the subject population of natives, indirect rule signified a mediated – decentralized – despotism’. The top tier of local society therefore comprised those (mainly in urban areas) who were under the direct influence of the colonisers, while the lower tier still answered to local authorities, who were in turn under colonial control – a ‘bifurcation of state’ (Mamdani, 1996, pp. 16-18). Regardless of the local figureheads, the colonisers remained very much in control of proceedings.

So how might Mamdani’s theory apply to India? Answering this question requires a basic understanding of the functioning of the caste system prior to the arrival of the British. A vast amount of literature has been written on the topic, with huge variations regarding the understanding and implementation of this system in different eras of pre-colonial history. Dumont, for example, emphasises the role of the system in separating society on several levels such as marriage, labour/professions and politics. He also highlights the differences between the traditional Western and Indian ideas of hierarchical society – indicating how religion had a key role to play in the Indian system (Dumont, 1966).

The first mention of caste in Indian society dates back to Vedic texts from as early as 1100 BCE. These texts describe four groups: the priests (Brahmins), the warriors (Kshatriyas), the traders and merchants (Vaishyas) and the servants (Shudras), which together compose the system of ‘Varna’. This system was strongly linked to stories of deific creation. The different groups assumed different hierarchical roles throughout history, with the Brahmins traditionally occupying positions of authority, down through the Shudras to the ‘untouchables’ below them, who were seldom mentioned in the literature, were given no significant role in society, and remained a historically disadvantaged group at the bottom of the hierarchy (Heath, 2012).

Now let us return our focus to the British Empire in India, and the uneasy amalgamation of these two very different approaches to social hierarchy. British rule in India can largely be divided into two phases. Before the Indian Rebellion of 1857, the fulcrum of the British presence in India was the East India Company, a commercial enterprise largely focused on plundering resources rather than on governance (The Editors of Encyclopaedia Britannica, 2019). Already at this stage the bifurcation principle could be seen, with a key role played by the Brahmins. In the South, the fall of the Maratha Empire prior to the British arrival facilitated the proliferation of Brahmin rule. ‘It was […] the spread of Brahmins […] which the British made use of to establish their rule. Brahmins were mainly responsible for the British East-India company to come to prominence’ (Kavlekar, 1975, p. 76). During this time there were also attempts to understand the local hierarchical structures of society, often delving into pseudoscience such as anthropometry (the use of physical measurements of the body to determine traits); however, no formal census or systematic study had yet been conducted to ‘categorize’ and group individuals (Bates, 1995).

However, the Rebellion catalysed a change in focus, with the Queen taking a more active role in the organisation of the country, becoming Empress of India in 1877. This added a ‘more personal note into the government and removed the unimaginative commercialism that had lingered’ (Agrawal, 2008, p. 11) under the direction of the East India Company’s Court of Directors. It went hand in hand with a further development of ‘indirect rule’ and a pronounced interest in understanding local ruling mechanisms, which manifested in two ways: the inclusion of Indians in parliament, and a commitment to understanding Indian civilisation and how it functioned (Dirks, 2001a). In fact, ‘the verdict was that interference had been the problem’ (with instituting effective rule), and Keene, a scholar in the Indian Civil Service at the time, believed that ‘successful British rule […] should be indirect as far as possible, but firm and conducive to loyalty’ (Robb, 2007, p. 1696). From the British perspective, such an approach would enable local rulers to continue their rule under the guise of being given ‘permission’ by the British to do so, with the latter maintaining their ‘right to depose native state rulers in case of “misrule” […] [which] was exercised quite often and thus constituted a credible threat’ (Iyer et al., 2008, p. 3). Thus, key to effective rule was a greater understanding of the Indian population and the caste system, including through the use of the census, as the following section will illustrate.

The commissioning of the census by the British served both to classify a vast unknown population and as a ruling instrument positioning the British at the top of the social hierarchy. Academic enquiries, such as how to stop female infanticide, served as the guise under which the first Census of India was undertaken in 1865. However, the deductive (‘top-down’) approach used, whereby the groups on the questionnaires were pre-defined by the administrators rather than being open to self-definition, was a sign of the British desire to enforce a rigid social hierarchy and entrench their influence. It was clear from an early stage that this was far too simplistic a means of classifying people: a much earlier survey (which still used open-ended questioning) had found 107 castes of Brahmins within a single city, for instance. This resulted in the alteration of subsequent censuses, such as the census of 1881, which divided the population into Brahmans, Rajputs (a predominantly warrior caste) and a large group of ‘others’ encompassing 207 castes, with the only criterion being that each had at least 100,000 people. Later censuses increased the number even further, highlighting the complexities of the local demography. By 1891, the approach had shifted entirely, using a system composed varyingly of occupations and race amongst other categories, all still under the guise of the word ‘caste’ (Dirks, 2001b).

Despite these classification difficulties, the approach used to implement the results of the later censuses was much the same as that of the earlier ones, evoking the ‘selective reconstitution’ of local leadership seen in Mamdani’s theory. As such, the caste hierarchy was used for power and control. For instance, the Criminal Tribes Act of 1871 typecast certain tribes as criminal and disruptive; one result was that male members of these tribes had to report to the local police station every week. Many of the ‘criminalised’ tribes were in fact those who had previously campaigned for greater rights against the British authorities (Simhadri, 1991). Moreover, an individual’s classification determined their employment opportunities, with only the Brahmins being eligible for positions of power (Szczepanski, 2018). People’s right to own land was also restricted based on their position within the Westernised interpretation of the caste system.

With a basic understanding of the caste system, and of how the British first tried to understand and then exploit it, we return to the question of whether Mamdani’s theory applies to colonial (and indeed modern) India. On the one hand, there are certainly elements of the literature that suggest it does. In particular, we see this in the maintenance of the ‘Princely states’, where local royals were afforded continued reign (provided they kept ‘in line’ with the British). Additionally, caste-based restrictions were imposed upon job opportunities, the right to own land, and so on, which clearly prioritised certain groups over others. ‘Tribal leadership’ in Mamdani’s writing is in many ways synonymous with the upper castes, as both served the goal of furthering colonial agendas, deepening and formalising (potentially) previously non-existent divides.

The legacies of these and similar classifications continue to reverberate through India today. Relics of caste permeate both rural and urban life. This is perhaps most apparent in the idea of ‘Scheduled Castes and Tribes’ and in affirmative action policies designed to try and reverse some of the damage done (de Zwart, 2000). Even today, many relationships, professions and political structures are still strongly influenced by caste. We see this in the story of Ovindra Pal, who, despite holding a Master’s degree, is confined to skinning animals, much like the generations of Untouchables before him (Rai, 2016).

Through this post, we have seen the incredible complexity of the caste system in the context of India’s vast populace, as well as the British attempts first to understand it and subsequently to control it through the installation of a bifurcated state. The census created a formal means of systematising the Indian population, which was used to restrict group roles and positions in society, arguably to further the influence of British rule. As Bates (1995, p. 30) rather succinctly states, ‘it would be a mistake to regard [caste-based colonial discourse] as solely the effect of […] “normalising” the sociology of India’. Perhaps it is unsurprising that some sort of classification tool would be used in this context. However, the census was clearly much more than that: it was a tool by which the British instituted power, a way to rule rural populations by proxy through the bifurcation of state and, ultimately, a way to reshape the understanding and functioning of Indian society.

In Deep Water: ‘Developing’ Yemen and the Aqua Crisis

by Benedikt Boeck

Located on the western edge of the monsoon zone, Yemen’s highlands are home to one of the oldest irrigated civilisations in the world. Throughout antiquity, its wealth – inextricably bound up with an agriculture founded on a myriad of mountain terraces, communal cooperation and intricate water-harvesting techniques – was the source of some of the greatest legends known to humankind. Tales of magnificent caravans dispatched by the Queen of Sheba to King Solomon are retold in Islamic, Christian and Jewish traditions, and Arabia Felix, ‘Flourishing Arabia’, is no stranger to any student of Latin. Yet today, in the midst of a destructive civil war, Yemen is confronting a water crisis unprecedented in its history.

Yemen’s water shortage is absolute and deadly. Its annual water consumption of 3.5 billion m³ exceeds renewable water resources by 1.4 billion m³, and with a yearly population growth rate of 3 per cent, the per capita availability of water has dropped to less than 85 m³/year, significantly below the internationally recognised scarcity threshold of 1,000 m³ (Lackner, 2014). Generally, the areas experiencing the most severe water paucity are those with the highest population density, and as groundwater sources are expected to run dry within the next three decades (World Bank, 2010a), bringing the civil war to an end is not the only challenge facing Yemen in the near future.
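A rough back-of-the-envelope calculation shows how these figures fit together; the population of roughly 25 million used below is an illustrative assumption of mine, not a figure given in the sources cited above:

\[
\text{renewable resources} \approx 3.5 - 1.4 = 2.1\ \text{billion m}^3/\text{year},
\qquad
\frac{2.1 \times 10^{9}\ \text{m}^3/\text{year}}{2.5 \times 10^{7}\ \text{people}} \approx 84\ \text{m}^3\ \text{per person per year},
\]

which is indeed below the 85 m³/year cited above and far below the 1,000 m³ scarcity threshold.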

Already, the ramifications of climate change exacerbate uncertainty around water access and further destabilise Yemen’s precarious ecosystem. Lackner (2017) illustrates how only 3% of the country’s surface is fit for agricultural purposes, with 60% of Yemen’s agriculture depending on rainfall and the increasingly unpredictable rainy seasons of March-April and July-August. Yet, suffering progressively violent monsoon rains, Yemen struggles with the destruction of crops, the erosion of fertile topsoil, the widening of wadis and hence further destruction of pastures, and the demolition of houses and infrastructure (Lackner, 2017). In 2015, for example, Cyclone Chapala brought wind speeds of over 120 km/h and 610 mm of rain within 48 hours – seven times the annual average – and displaced 45,000 people (Lackner, 2017). Moreover, 3-5% of arable land is lost annually to desertification, which, combined with worsening periods of drought, facilitates wind erosion of rich soil (World Bank, 2012b).

When exploring the roots of Yemen’s water crisis and its vulnerability to climate change, one very quickly discovers that its causes are almost exclusively man-made, with mismanaged agriculture, purblind international development efforts and poor local governance all featuring prominently. And while managing the manifestations of climate change is now largely beyond human control, and perhaps always has been, one is left to wonder whether the maintenance of traditional approaches to farming – quickly deserted in the wake of ‘modernising’ development programmes – could have cushioned at least some of the repercussions of global warming (Blumi, 2018).

While Yemen’s agriculture was largely sustainable until the early 1970s, it now monopolises 90% of the country’s annual water resources, as irrigated areas expanded tenfold between 1970 and 2004 (Lichtenthäler, 2014). This growth might well be a direct result of globalisation and Yemen’s integration into the capitalist global economy. On the one hand, the 1980s saw a mass exodus of rural Yemeni men to the Gulf states in search of a better life and lucrative work. Their absence often resulted in traditional irrigation networks falling into disrepair and, since remittances often flowed into uncontrolled, vast and profitable agricultural expansion and investment in illegal deep-well digging, there was little need to reflect on the necessity or even the upsides of established custom (Blumi, 2018).

On the other hand, against the backdrop of a divided Yemen during the Cold War and Ali Abdullah Saleh’s attempts to ‘modernise’ the western-orientated Yemen Arab Republic, the Bretton Woods institutions got their foot in Yemen’s door. Initially supporting a neoliberal agenda centred on the expansive development of high-value, water-thirsty export crops, and thus (in)directly encouraging further illegal exploitation of aquifers, international development planners eventually realised the finiteness of Yemen’s water. Still, as the following two examples will elucidate, more than once did their naïve rectifying policies fail to understand the larger structural socio-political forces at play, which ultimately undermined their effectiveness and wasted precious time.

First, efforts to raise irrigation costs by reducing costly diesel subsidies, and thereby curb water overuse, fundamentally misunderstood essential power structures in the then unified Yemen (Breisinger, 2011; Ward, 2000). Not only did rising diesel prices provoke extremely violent and frequent riots among ordinary citizens with their backs against the wall, but they also challenged the powerful group of large-scale landholders by imperilling abundant agricultural revenues as well as their diesel-smuggling business. As described by Phillips (2016), for instance, Saleh’s cronies would buy a barrel of diesel locally for US$25, smuggle it out of the country and sell it for US$300 at sea. As the regime relied heavily on the goodwill of these landowners to stay in power, and considering widespread popular resistance, it should come as no surprise that policies aimed at diesel subsidy reductions failed rather miserably.

Second, the turn of the century witnessed an increased focus on participatory development programmes (PDPs) to tackle the water issue; these attempts again displayed a flawed understanding of Yemen’s socio-political landscape. Participants were recruited not from the wider population but from narrow professional pro-Saleh elites with questionable agendas. Combined with the internationally supported and ever-increasing decentralisation of mushrooming water management agencies, which were headed by the very elites partaking in the PDPs, these new policies further strengthened large-scale landowners and aggravated their exploitation of depletable aquifers at the expense of the (mostly poor) rural and urban majority of users (Lackner, 2017). Ironically, PDPs have thus tied a Gordian knot around the water issue rather than solving it. Reform efforts to improve the water and living situation of the poor pivot on Yemen’s governance structures. Yet these structures served the interests of Saleh and his web of cronies and are, furthermore, engulfed in political quagmires as various sub-sections of the elite exploit them for their own power struggles. Given that they have also been described as merely ‘institutions to provide senior positions for redundant notables’ (Lackner, 2014:168), reasons for optimism appear rather limited.

As might be expected, issues of water shortage, desertification and mismanagement come at a terrible social price.

Around 70% of Yemen’s population live in rural areas, and those who remain are forced to spend ever more time and money fetching water from ever more distant wells; resources that ought to be spent on education and other activities key to Yemen’s future development. Furthermore, disputes over land and water distribution are turning increasingly bloody, killing around 4,000 people in 2007 alone (Hales, 2010). Many villages have already been completely abandoned, rendering the gradual emergence of a potential post-conflict, sustainable tourism industry or the return of agriculture to these areas unlikely. Not only do these migration movements tear apart the social fabric in Dhala’a, al-Baidha and Sada’a (Lackner, 2014), thus further destabilising the war-torn society, but they also result in major influxes of internally displaced people into urban areas.

Even without a gigantic wave of refugees, urban centres located in the mountains are already on the brink of collapse. The approximately 40% of households connected to the public water network in Taiz, for instance, tend to receive (undrinkable) water only every fifty to sixty days and are forced to rely almost exclusively on tankers, while Yemen’s capital Sana’a, located 2,200 m above sea level and separated from the Red Sea by two major mountain ridges, contemplates pumping desalinated water 140 km across nigh impassable terrain to quench its thirst (Lackner, 2017). Similarly, coastal cities like Hodeida, Aden and Mukalla are presently battling rising sea levels, water overconsumption and saline intrusion into their water reservoirs, which reduces communal water resources, as well as the steady decline of sewerage under increasing pressure on sewage networks – a major factor in the horrific cholera outbreak that currently puts the lives of at least a million Yemenis in jeopardy (Lackner, 2017; Camacho, 2018).

Without water there is no life. And while both Lackner (2017) and Lichtenthäler (2014) call for optimism, arguing that careful agricultural restructuring, an increased emphasis on community-based water distribution arrangements and substantial investment in new climate-sensitive infrastructure could turn the tide to Yemen’s advantage, the most destructive and atrocious civil war in the country’s long history is entering its fourth year. Some analysts (Clifford and Triebert, 2016) also highlight the role of the Saudi-led coalition in exacerbating the current water crisis by targeting crucial water infrastructure. It remains to be seen whether desperately needed changes can be implemented in a country trapped by its “modernisations” and engulfed in incessant hostilities. Writing 3,000 years after the reign of the Queen of Sheba, one must conclude that the prospects of Yemen once more boasting spectacular wealth and mystical fertility are less than lush.