Friday, 9 December 2016

Organising academic work

Recent exchanges about research careers have motivated me to update the blog post I rather lightheartedly wrote some time ago about organising academic work.

I have long wondered why academic employment should not just obey the same regulations as any other kind of job. There is a downside to this. Academic tenure was designed to prevent authorities from sacking academics for having unpopular opinions. Unpopularity could arise from political issues of course. But other matters are at least as important. Freedom to criticize medical and drug research for example is vital for the safety of patients. But how genuine is job safety for "tenured" academics really these days? I really don't know (never had tenure) but reactions would be interesting.

Be that as it may, if academic departments started to run like commercial companies, you would have a situation where everyone's job depended on the firm getting enough business to pay the wage bill. Let's say it costs £150K per year to employ a lecturer, adding together salary, NI, pension and "overheads". Anyone who has had to cost a research proposal can do this now. If we take student fees to be £9k per year, then 20 undergraduates per lecturer is a generous allowance.
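Just to make the arithmetic concrete, here is a minimal sketch (the figures are the illustrative ones above, not official costings):

```python
import math

LECTURER_COST = 150_000  # salary + NI + pension + "overheads" (illustrative)
FEE = 9_000              # undergraduate fee per year (illustrative)

# Undergraduates needed for one lecturer to break even
print(math.ceil(LECTURER_COST / FEE))   # 17, so 20 is indeed a generous allowance

# A department of 10 lecturers supported by 200 students
surplus = 200 * FEE - 10 * LECTURER_COST
print(surplus)                          # £300,000 left over, e.g. for admin support
```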

But what about research? Let's go back a bit to where it started to be essential for departments to get research money. I don't remember the dates at all well, but it was a Thatcher plan. First there was an attempt to divide universities into "R" and "T" (no prizes). This did not work. The next step depended on the fact that in pre-Thatcher days money went to universities from the government in the form of the "science vote". This money was notionally divided into thirds: 1/3 for teaching, 1/3 for admin and 1/3 for research. Lecturers notionally spent their time in that way. The government took away the research third sometime in the early 1990s (I think) and gave it to the Research Councils. So university departments had to win back 1/3 of their funding by applying for grants. I always thought the idea was that this would de facto create the R and T divide, with some universities cleaning up all the research money and thus having to do a lot less teaching (see my previous blog on this). These would then become the elite that of course the children of the rich would attend. Lecturers would not be dealing with hundreds of pile 'em high students and would be at the cutting edge of their disciplines (they thought).

The form taken by the missing 1/3 was the "overheads" paid on the wages of research assistants. 4-5 years ago these were increased from 40% of salary costs (i.e. a percentage of salary) to fixed amounts set by the universities. So today, whether you are paid £25k or £60k, your "overhead" at UCL, Essex or Imperial will be around £75K. It is a horrible thing to say, but I have always feared this means that research assistants are basically "overhead fodder". The colleges get the same overheads regardless of how much they pay you, so it is in their interests to replace you with cheaper meat as they get the same money regardless.

Back to organising research today. My vague notion has been that you need to calculate how much "business" you need to support your staff. This is what happens in many industries. It does mean that when business dries up a redundancy situation arises, as it did a few years ago at the National Centre for Social Research. But my idea was that this would be the same for everyone, with no special protection for certain jobs. The flip side would be that everyone joining a department would have to start developing their own portfolio of 'business'. They would be mentored and supported to do so. This is how the Institute for Fiscal Studies operates.

It is quite sobering to consider that it would take 200 students to support a group of 10 lecturers (this would leave some money to employ an admin expert). But this calculation becomes more important now that we are being told by senior university managers that in fact "research does not pay", apparently even research that pays the full "100% overheads". At least that is what I have been told.

The problem is, if we are all to be beavering away maximising the experience of our students (not an unreasonable thing to do), where is new knowledge coming from?

Thursday, 16 June 2016

A sociological approach to the relationship between research and policy.

There is so much about "impact" that takes what I think is a naive approach. Here is a rather old attempt I once made to apply a more sociological approach to the relationship between research and policy. Here it was applied to "Evidence Based Medicine" (EBM).

RELATIONSHIP BETWEEN RESEARCH AND POLICY: AN APPROACH FROM SCIENCE AND TECHNOLOGY STUDIES


Evidence and evaluation are taking a higher priority in the delivery of health services, and yet it seems to have been surprisingly difficult to establish which are the most effective treatments and to ensure that these are used. A whole industry now exists around “translational medicine”, which attempts to align research and clinical practice more closely. In such a context, it becomes increasingly problematic to operate with over-simplified notions about the relationship between research and policy. The idea that research takes place in an ivory tower motivated purely by intellectual curiosity, and is then applied to obvious technological, medical or social problems is widely held. This may indeed happen on some occasions, but in practice the great majority of changes in the designs of clinical procedures, artifacts or policies need to be understood as part of a more complex process. Problems in deciding what is the best practice and making sure this is carried out are to be expected rather than wondered at. This more complex view of the relationship between science and practice is reflected, for example, in the call by the US Supreme Court for judges to act as arbiters of the validity of the scientific (including medical) knowledge claimed by ‘expert witnesses’ (Solomon and Hackett 1996). Rather than the scientist being seen as the final arbiter of what is ‘fact’, the claim to truth is seen as an adversarial one.

The late 1970s and 1980s saw the flowering of a number of related fields of inquiry which made it possible to gain a more detailed understanding of the ways in which research programmes are situated within policy-making contexts: in particular the sociology of social problems (e.g. Spector and Kitsuse 1979, Gusfield 1981, Aronson 1982, Pfohl 1985), the social histories of science (e.g. Shapin and Schaffer 1985) and medicine (e.g. Treacher and Wright 1982),  social policy (Lindblom and Cohen 1979, Prince 1983) as well as in the sociology of science and technology (Barnes and Shapin 1979,  Latour and Woolgar 1979/1986, Mackenzie 1981, Gilbert and Mulkay 1984, Collins 1985, to name but a very few and for an overview of debates see Pickering 1992). The term ‘socio-technical system’ is sometimes used to refer to the networks of human, biological and physical actors involved (Bijker et al. 1987). This paper suggests that an approach based on a combination of these ideas with historical work (Macleod 1967, 1974, 1988, 1993; Macdonagh 1958, 1965) may prove helpful in understanding the changing relationship between the production and utilisation of scientific knowledge.


Policy and research: a conventional view

A clear and succinct version of the conventional ‘empiricist’ model of the relationship between research and policy is given by Martin Bulmer in his Uses of Social Research:

governments, firms and other organisations need adequate information on which to base policies which they ...implement. The task of ... research is to provide as precise, reliable and generalisable factual information as possible ... This information, when fed into the policy-making process, will enable policy-makers ... to reach the best decision on the basis of the information available (Bulmer 1982: p. 31).

In the health field, a similarly clear summary of the nature of the relationship is given by Elwood and Gallacher (1984):
the role of social policy ... is to effect change in the search for improvement, and the role of science ... is to provide the firmest possible empirical grounds for change ... (317). It is the role of policy, in their view, to determine which questions are important enough to justify devoting scientific resources to. Danger arises, however, if policy actors meddle in the interpretation of data, or if scientists make statements which go beyond its limitations.

The paradigm for the use of research in such a problem solving process in the UK is the so-called ‘Rothschild’ model, in which government or  some other client sets the problem and the researcher produces the answer as it were ‘to order’. This paradigm finds considerable favour in policy making circles, and was imposed during the 1980s on government funded research groups as a way to increase the cost effectiveness of money spent on science. However, this model is not found in practice to give a very accurate picture of how policy makers and researchers interact (Feldman and March 1981; Rein 1983; Bulmer 1982: 48).

Models of the research-policy relationship
There are a number of alternative approaches to understanding the research-policy relationship (both descriptive and prescriptive), for example:

Linear model
In this model, little attention is paid to the ways in which, or the reasons for which, new knowledge may arise. Once in existence, knowledge is regarded as leading to useful policy applications in similarly unexplicated ways. This picture fits the examples of the contribution of physics to the development of micro-processor technology, or of molecular biology to genetic engineering. Implicitly, it assumes that science follows its own agenda but more or less automatically results in technological advance and policy change.

Enlightenment model
In this paradigm, new knowledge is seen to enter the public domain via a more indirect process involving “quality” media, opinion leaders, pressure groups etc. rather than direct contact between scientists and practitioners. It has been described as the provision by scientific information of an ‘intellectual setting of concepts, propositions, orientations and empirical generalisations that will expand the policy makers frame of reference’ (Davidson and White 1988: 2, Gandy et al. 1983). It was intended perhaps more as a model of the relationship between social research and social policy as opposed to science and technology (Janowitz 1972, Weiss 1977, 1979, Wagenaar et al. 1982). However, it is relevant in the case of medicine and health care, where the influence of pressure groups and the media is known to be considerable (Elwood and Gallacher 1984, Gabe et al. 1988, 1991).

Political model
This might be regarded as a version of the Rothschild model discussed above which is more descriptive than prescriptive. It holds that in fact, customers for research are not motivated by objective evaluation of problems. Rather, the policy maker or practitioner, acting as the customer, commissions, not just any researcher, or the most skilled, but one whom they know in advance will produce a certain answer (Caplan 1976, Tizard 1990).  The outcomes of political uses of research will not, of course, appear to be so. However, it will often be found that ‘even where debate includes specific reference to research findings, the rhetoric of empiricism may merely constitute a cosmetic 'rationality'’ (Davidson and White 1988: 6). This approach has some advantages over the first two. The ‘origin of the problem’ is made more explicit, as is the articulation between researcher and practitioner or policy maker. Customers are regarded as actors with interests, and in order to advance these they need to be able to appeal to the right kind of facts. The researcher is given the job of producing these. It is an awareness of this process that has led to the judicial treatment of expert witness testimony in the USA. Experts are no longer to be treated as authorities, but as partisans.

Partisan mutual adjustment model
This model acknowledges that scientists as well as policy makers live in a real world where careers and ambitions play a role in the negotiation of both technical and political decisions (Lindblom 1965). Actors on all sides are ‘partisans’ for their own interests, but will adjust their claims in the light of the balance of forces and the desirability of alliances with other groups. It is not only the customer ordering the ‘Black Magic rather than the Dairy Box’ (Bartley 1992), but the scientist/contractor who can see on which side her bread is buttered, and deliver accordingly (Hanmer and Leonard 1984). It must be stressed that those who advance variants of this model of the research-policy relationship are not claiming that researchers cold-bloodedly fiddle results to please their patrons. What we learn from the sociology of science (Hacking 1992) is that research itself is such a heterogeneous and indeterminate process that such demands are absorbed within its normal practice.


An alternative approach: policy issues as social problems
This paper will argue that it is not the mere existence of research findings, or even the opinion of the academic community as to their quality, which ensures the application of new knowledge in practice and policy. It will argue for the relevance of advances in the sociology of science and technology in France and Britain which have emphasised the socially located nature of science. These studies have shown that research takes place within a context of political and pressure group negotiation, in which scientists enter the policy arena as part of a form of entrepreneurialism in order to gain support for their work. They show how central this entrepreneurialism is to the doing of successful science, and describe the skilful work of a social nature in which scientists must engage even before the work of fact-creation can begin.

Pressure groups and scarce resources
In this model, the relationship between research and policy may be regarded as taking place within ongoing ‘social problem processes’ -- claims and counter claims on the resources of governments and other major resource holders (Schattschneider 1960; Ryan 1978; Richardson and Jordan 1979). These processes take place within ‘issue communities’ comprising members of relevant pressure groups and professions and journalists,  as well as  ‘political administrators’ (as Heclo and Wildavsky [1974] term both politicians and civil servants) and the scientists themselves whom we might regard as the principal actors.

The notion of the social problem process is derived from the sociology of deviance. It refers to the process through which one or more pressure groups set out ‘moral claims’ to the effect that a certain phenomenon or condition is objectionable, ‘a problem’, and ‘something must be done’. This is regarded as a claim on scarce resources (of businesses, companies, or governments), and therefore likely to be opposed (Manning 1985). Claims proceed through a set of stages in which firms or governments may respond by denying the problem, or by instituting research to show that it does not exist or is minor (Downs 1973, Spector and Kitsuse 1979, Manning 1985).

‘Factual claims’ are also made as part of the social problem process. These may include debates over the size, strength and reality of the problem, as well as the effectiveness of proposed solutions. It is into this fact-claiming process that research comes into play most obviously: for example the debates on leukaemia clusters, the effects of passive smoking or BSE. Social problems can get lost in ‘loops’ where debate goes round in inconclusive circles, either in the form of irresoluble disputes between scientists, or in more institutionalised forms such as Royal Commissions.

The other groups involved in social-problem claims and counter claims are those Heclo and Wildavsky call ‘political administrators’: politicians and civil servants. Put very crudely and briefly, both politicians and officials in ‘spending’ departments of state (i.e. those other than the Treasuries) have an interest in enlarging their spheres of action. It is a benign contradiction within modern government that while some careers and political reputations may be made by restricting public spending, more are made by enlarging it: ‘policy-makers, in office for a relatively short span of time, are anxious to 'make a mark' on policy and hence on society; ...’ (Davidson and White 1988: 8; Walker 1987). For the politicians, this is what gets them noticed as significant reformers; for the officials, this is quite simply what keeps them in work (Prince 1983). As Hall et al. put it: ‘The same hierarchy that provides the basis for regulating issues and initiating policies is also a career structure. It represents rewards, satisfaction, opportunity, disappointment ... for ... individuals’ (Hall et al. 1978: 66).

Role of the media
From sociological studies of the media comes the idea that groups involved in social problem processes make strategic use of journalists. In one view, this takes the form of ‘information subsidy’ -- the journalist receives a ‘pre-digested’ version of a complex story (Gandy 1982). The journalist is thereby ‘subsidized’ in terms of the amount of effort involved in getting the story, and in return, the story is more likely to ‘make the papers’, and gain access to the agenda for public debate. Information subsidy may come from any of the parties in the issue community. Very commonly it comes from scientists or professionals who are also participating in pressure group activity.

All participants in a social-problem process are engaged in one or other form of struggle for career progression. The individual must ‘have a bright idea and fight for it’ (Bartley 1992), and then invest the resulting career capital in a skilful manner. Pressure groups are often a good source of bright ideas, as they are of new stories to fill newspapers. This leads in many cases to a symbiosis between what might appear to be conflicting interests. In order to gain the upper hand in a scientific competition, scientists may risk being ‘misrepresented’ or ‘sensationalised’ by seeking out the attention of journalists. This often takes place at strategic moments, such as when decisions on funding are being taken, or when there is a threat to close down a research facility (Silverstein 1981). Similarly, in order to advance her career, an ambitious official needs to be one up on her peers in the bureaucracy. One way to do this is to have good contacts in pressure groups and present chosen ideas to politicians as potential reforms.  Covert co-operation may take place in various ways between journalists and officials on the one hand and pressure groups or ‘deviant’ scientists on the other.

Marketing expertise
Professional groups are an essential part of the social problem process. In many cases, a whole process is driven by one or more ‘orphan professions’, who seek to have some condition/s defined as a social problem in order to offer themselves as (at least part of) the answer. I have called this ‘sub-professional entrepreneurialism’ (Bartley 1992). This may happen after the end of previous social-problem processes which leave expert groups high and dry. For example, several notable reforms in the area of public health took place after the end of the Napoleonic Wars, a time when overcrowding in the professions was combined with de-mobilisation of military officers. A whole social layer was thereby available for recruitment into the growing inspectorates produced by a series of reforms such as the Public Health and Passenger Acts (Lewis 1965; Reader 1966; Macdonagh 1958, 1961, Novak 1972, Johnson 1972). This is another way of looking at what American sociologists of deviance have termed ‘moral entrepreneurialism’, except that it makes more explicit the fact that what is being marketed is the professional group itself.

Networks of knowledge
Scientists also engage in similar activity, which I will call ‘sub-disciplinary entrepreneurialism’. Whereas the sociology of deviance has paid relatively little attention to the activities of scientists within social problem processes, their activities are extensively described and analysed in the sociology of scientific knowledge (SSK) and science and technology studies (STS). STS forms something of a radical departure in the understanding of science. One would not normally imagine that the ethos of science is compatible with entrepreneurial activity. And this was indeed the view taken in earlier sociological approaches such as that of Merton. For Merton, ‘good science’ needed no explanation other than rationality and the true state of nature in the external world. Social or political factors, in his view, only operated to produce conditions under which irrationality prevailed, and the progress of science was blocked (Merton 1973).

Later studies, however, indicated that successful science is not necessarily a less ‘social’ phenomenon than unsuccessful science. STS aims for an understanding of the entrepreneurial skills of successful scientists -- those whose factual claims are accepted as ‘knowledge’. Successful knowledge production depends, according to this perspective, on their ability to engage in what Callon and Latour have called ‘translation’ (Callon 1981, 1986; Callon and Law 1982; Latour 1987). Scientists must undertake extensive network building (the approach of these French scholars is sometimes referred to as ‘actor-network theory’), for example convincing funders that it is ‘in their interest’ to support the work which the scientists wish to do, convincing journal editors to publish their papers, convincing other scientists to accept their results. This work extends far beyond the laboratory or research unit itself. As Latour puts it

If you get inside a laboratory, you see no public relations, no politics, no ethical problems, no class struggle, no lawyers; you see science isolated from society. But this isolation exists only insofar as other scientists are constantly busy recruiting investors, interesting and convincing people. (1987: p. 156)

These activities may include enrolling a relevant sub-profession with a parallel interest in claiming that a serious problem will be solved or avoided, for example, by an alliance with a pressure group, as is found in discoveries of disease clusters or controversial new treatments for diseases suffered by articulate communities (Nicolson and McLaughlin 1988; see also Callon et al 1986).

The activities which take place within actor-networks affect the products of scientific and technological activity at the most basic level (Mackenzie 1981), although this process is never described in accounts of scientific work. According to Latour, one of the most important activities in science is the artful elimination of all traces of the social or the political from the final reports of findings. Like all other groups involved in social-problem processes, scientists are using them to further their claim on resources. In Pickering's words, ‘doing science is real work and ... real work requires resources for its accomplishment’ (Pickering 1992: 2). Resources are claimed, in this case, by offering accounts of events which are ‘objective’ and neutral, that is, unaffected by social or political interests:

We need a court of appeal which is maintained outside of political debate, in order to be able to put an end to political dissent. Scientists and politicians agree on this necessity. That is why, despite being shown by every scientific controversy that scientific proof is negotiated, we cling to the opposite idea for all our lives are worth (Latour 1981).

Hence, to go back to the example which opened this paper, the sense of shock we feel at finding that a US court has demanded that scientific evidence in trials no longer be treated as non-adversarial. Science's claim to provide authoritative rather than partisan accounts is seen to crumble.


Fact creation in the social-problem process
To recapitulate the argument of this paper: in the social problem process, typically an entrepreneurial scientist or professional forms an alliance with a pressure group organised around a relevant issue. What makes it relevant is that the issue could shape up to be the sort of ‘problem’ to which the entrepreneurs might offer their skills as ‘the solution’. They put the issue on the public agenda by giving subsidized information to the media. Government then produces an ‘official response’ which in turn may well create business for another group of professionals and/or scientists.

Out of this process emerge a series of factual claims, emanating from both sides. Some of these claims may attain the status of accepted fact. But this will only happen if there is a position which is agreeable to the majority of participants in the issue community. Participants will agree to a factual claim if it suits their interests (they will then be ‘enrolled’ in Latour's terms). The more widespread and diverse the issue community (which may perhaps be regarded as another way to conceptualise an actor network, except it gives more primacy to actors from the policy making sphere) the harder the fact will be. A greater amount of work of enrolment and ‘interessement’ will have had to be done, and a statement which is found useful by a wide variety of highly diverse groups is going to find more defenders than one which is more local and specific.  This is the key to the actor-network theorists’ paradoxical claim that hard facts are not less social but more so (Latour 1987).

Once we start to look at the research-policy relationship as merely a facet of social negotiation over markets for expertise and, ultimately, the allocation of scarce resources, it is far easier to understand why clear instances of research influencing practice or policy are so hard to come by. For one thing, as indicated in several of the models described above, it is questionable whether the direction of influence is very often from research to policy at all. Several commentators see it as quite the opposite. The actor-network approach takes a slightly more complex view, that facts themselves emerge from accomplishment of an alliance of groups both inside and outside the scientific community:

technical objects must be seen as a result of the shaping of many associated and heterogenous elements. They will be as durable as these associations, neither more nor less. Therefore we cannot describe technical objects without describing the actor-worlds that shape them, in all their diversity and scope (Callon et al. 1986).

Social problem processes may end in a wide variety of different ways. Perhaps the most unusual of these is that one side changes its view ‘in the light of evidence’, leading to visible change with an identifiable research input. Key groups may lose interest and move on to more promising terrain; some sort of institutionalised conflict may result in the setting up of new bodies (for example, in the UK the ‘Committees on Medical Aspects’ of various environmental factors such as food quality (COMA) and air quality (COMARE); in the USA the National Institute for Occupational Safety and Health (NIOSH) and the Food and Drug Administration (FDA) are examples).

Practical implications for health services research
This view of science (both pure and applied) as taking place within social-problem processes may have implications for present concerns relating to the dissemination of innovations and evidence based medicine. Perhaps it is most consistent to regard EBM itself as part of a social problem process, one which combines concerns with clinical efficiency, the cost of modern medicine and wider economic management. In this process, claims and counter claims are made as to the necessity for health service rationing. Knowledge claims relating to the effectiveness of various treatments therefore take on a higher profile than would previously have been the case. This is agreeable to certain subprofessional and subdisciplinary groups, who can see their relative importance within the ‘medical-political complex’ growing at the expense of previously more favoured groups (primarily the clinical specialists). For example, major funding was made available to a centre for the analysis and dissemination of information on the effectiveness of different procedures in pregnancy and labour, and later for all drugs and procedures (the ‘Cochrane Centre’), and for an NHS ‘Centre for Reviews and Dissemination’ which gathers and evaluates studies on a wide range of clinical and non-clinical issues and disseminates its findings on ‘best practice’.

The technical capacity to estimate the effectiveness of treatments has long been available, and advocated by epidemiologists and health service researchers. But it took an organizational upheaval in the UK National Health Service to bring about a situation where such claims were given any credence. Crudely, reforms of the UK health service have given far more power to health service managers, whose task is to see that treatment objectives are fulfilled within the limits of their budgets. Within this organisational structure, the knowledge claims of epidemiologists and medical statisticians are valued for their usefulness when set against the claims of clinicians and the demands of individual patients. These more objective estimates of effectiveness enter into the implementation and justification of rationing, which ensures a continuing market for the expertise of those who provide them.

Of course, the picture is nothing like as simple as this in practice. An STS perspective would see it as problematic to approach evidence based medicine as a series of attempts to ‘disseminate the facts about most effective practice’ beyond their original discoverers. Latour has criticised a simple ‘diffusionist model’ of the translation of knowledge into practice. Diffusionists ‘simply add passive social groups to the picture’ who may ‘resist’ the innovation. We should realise, he insists, that ‘the obedient behaviour of people is what turns the claims into facts...’. In the STS model of the science-policy relationship, facts themselves emerge from knowledge claims when these claims are adopted by wider networks of people with greater diversity of interests and concerns. The successful fact builder is one who makes the most skilful efforts to discover and enrol the interests of new groups.

We may also have to acknowledge that there is no single moment when a fact is accepted and becomes part of an unchanging knowledge base. The very nature of the ‘fact’ may undergo change when new groups become involved in the process. Hence the chronic indefiniteness of much health advice (and other forms of medical knowledge), which gives rise to reportedly widespread public cynicism and the failure of health education campaigns. The complex and diverse combinations of interests and concerns within the issue communities within which research on many health issues takes place mean that it is extremely difficult to stabilize a wide enough network that accepts any given claim.

Perhaps such complexity is endemic to a situation in which research and professional activity are so closely related, and done by the same people, as is the case in medicine. In such a case, professional interests intervene between factual claims and ‘end users’ in a different manner than would be found in the case of, say, physics and electrical engineering. In law, a ‘pure’ profession not hybridized with ‘science’, all claims are regarded as essentially adversarial. Medicine is in the difficult position of combining professional with scientific modes of legitimation.

There is in fact rather an interesting parallel between some of the discourse on Evidence Based Medicine and Latour's history of Pasteur (Latour 1984). At first, physicians in late 19th century France were not interested in Pasteur's ideas. The private physicians' journal Concours Medicale described itself in September 1894 as having ‘voluntarily abstained from speaking of microbes, rodlike cells, comma bacilli and cocci ... knowing that practitioners, its usual readers, would not care very much for that overspeculative ... hodge-podge’ (quoted in Latour 1987: 132). The conventional explanations for this included the greater popularity (at that time) of public hygiene, the corporatist interests of medicine, and the backward or reactionary nature of physicians' beliefs. According to this account, without these resistances the ideas of Pasteur would have swept through the practice of medicine by virtue of the momentum bestowed by their truth. Then, after 1894, the tide turned and doctors accepted Pasteur's ideas. In conventional histories of ideas this change is attributed to a change in the attitudes of doctors. What this overlooks, Latour claims, is the long and patient work of negotiation between Pasteur and the physicians to arrive at an altogether different technical translation of the ideas.

How could the private personal physicians agree to Pasteur's ideas of contagion? If they accepted the notion of contagion this would have had drastic results for their confidential relationship with patients - not to speak of the consequences of contagion for converting the trusted physician into a potential vector for disease himself. Why should they be enthusiastic about prevention? Unlike army doctors, who were salaried civil servants and the first group to accept microbiological ideas, the physician's market depended on at least some people needing to be cured. The acceptance of microbiology by private physicians came only after Pasteur's laboratory had developed a diphtheria treatment which had some curative effect, instead of a rabies vaccine which had only a preventive one. The expression of the scientific theory which got it accepted by the wider network of the medical profession was therefore one more congruent with the role in which doctors felt comfortable: ‘As soon as they were able to go on doing what they had been doing, the same physicians who had been called narrow and incompetent immediately got moving ...’ (Latour 1987: 127)

This historical account may offer some help when asking why it is so difficult to arrive at consensus on the effectiveness and cost-effectiveness of treatments and why, even when this appears established, it is still hard to ‘bring practitioners in line’ with the findings. In this paper it is not possible to pursue these questions in any detail -- the aim here has been to suggest a social-historical framework within which case studies might usefully be carried out. We can perhaps get as far as asking: in order to carry out an analysis of EBM within this perspective, what questions should be asked? The orthodox method might be to attempt to understand the resistance of [some] doctors to new knowledge and techniques. This then runs into the problem that in many fields, such as prevention of heart disease (a disease which accounts for almost half of all premature male deaths), definitions of what counts as new knowledge constantly change. In the perspective outlined here, we would be more inclined to start by asking what are the important groups involved in social-problem claims making (‘heart disease is too prevalent in our society’), moral claims (‘something has to be done about it’), factual claims (‘we know what should be done’) and subprofessional entrepreneurialism (‘we are the people to do it’). Who is making these claims? Why now? What are their objectives? What alliances is each group pursuing? And how is the pursuit of alliances (‘network building’) shaping the nature of the claims that are made over the course of the process as a whole? This, of course, includes the factual claims about how nature (the body, the mind, the microbe, the molecule etc.) really is, how the machines or techniques actually do work and so on, as well as claims about what the groups themselves can really do and moral claims about how the world ought to be.

CONCLUSION
The growth in popularity of Evidence Based Medicine is a fascinating phenomenon which may present social and historical studies of science with new challenges. According to Latour (1987) the two most important sets of alliances made by scientists have been those with the military and those with medicine, because ‘in both cases money is no object’ (Latour 1987: 172). This is a statement which will cause some astonishment to readers in the late 1990s. Although some of the most powerful pressure groups in Western countries are still those which protect the interests of the medical profession, these countries have all been involved in efforts to cut their medical budgets. However, just as the actor-network theory would lead us to expect, these efforts to control expenditure have duly broken open many of the factual claims made in the name of medical science. It is a tribute to the power of medicine's previous alliances that so many of its factual claims have been accepted so readily, without the need for alliance with other groups such as statisticians or economists. How will new facts about the effectiveness of treatments emerge? The theory would predict that this will only happen when a new alliance of forces has been settled. So it is not a case of finding out which facts are the real ones and then trying to commit clinicians to their use: rather it is only when there is a new settlement between clinicians, basic scientists, political administrators and whatever other significant interest groups enter the field that new knowledge about clinical effectiveness will become established.


Thursday, 5 May 2016

Mayhew and Smith “An investigation into inequalities in adult lifespan”

The Introduction to this paper states that “the gap in life expectancy between the shortest and longest lived is widening for the first time since the late 1870s”. The widening is attributed to lifestyles as the main reason for the trend, because “Men in lower (sic) socio-economic groups are the most likely to make damaging life-style choices”.
What are the data used to reach these conclusions? Wisely, the authors only start to consider inequality from the age of 30. Infant mortality would be pretty hard to attribute to foolish choices of health behaviour. The measure used is an Inter-Percentile Range (or IPR) which compares the average age at death in the 10% of people who die youngest with that in the 5% who live longest. This is done separately for men and women. To give examples from their Table 1: in 1879 the average age of the 5% of men who lived longest was 85.6 years, compared to 39.7 years in the 10% who lived the shortest time, while in 2010 it was 95.7 years for the longest-lived 5% and 62.4 years for the shortest-lived 10%. This gives an “inequality gap” of 44.9 years in 1879 and 33.3 years in 2010.
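To make the measure concrete, here is a minimal sketch of the calculation in Python (illustrative only: the function and cut-offs follow my description above, and Mayhew and Smith work from full mortality data rather than a simple list of ages):

```python
import numpy as np

def ipr_gap(ages_at_death):
    """'Inequality gap': mean age at death of the longest-lived 5%
    minus the mean age at death of the shortest-lived 10%."""
    ages = np.sort(np.asarray(ages_at_death, dtype=float))
    n = len(ages)
    shortest_10 = ages[: max(n // 10, 1)]    # youngest 10% of deaths
    longest_5 = ages[-max(n // 20, 1):]      # oldest 5% of deaths
    return longest_5.mean() - shortest_10.mean()
```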
The argument that “inequality” is widening is based on a comparison of the difference between the top 5% and bottom 10% between 1879 and 1939 (where it decreased by 7.7 years for men) and the same difference between 1950 and 2010 (during which time it decreased by only 1.2 years).
This is a very interesting use of the Mortality Database, a rich source. The first thing I noticed was the huge difference between the changes over time in the top 5% and those in the bottom 10%. The rise from 84.6 years to 97.5 is a hefty one, but a lot less than the difference between 39.7 and 62.4. The longest-lived 5% gained around 13 years while the 10% who died youngest gained around 22 years. The paper also contains equally interesting data for women.
This analysis complements the well-known increase in health inequality that motivated the Black Report, the Acheson Report, the work of the Marmot Review and similar work. We know that health inequality increased steadily from the 1930s to 2001. But what these reports mean by “inequality” is totally different to what Mayhew and Smith mean. They are talking purely about the average age at death in groups defined according to whether the members were among the shortest or longest lived at any given period. The Black Report and its successors were talking about groups based on social class. Recent reports from the Marmot Review group define groups in terms of the level of deprivation in a given residential area. In Mayhew and Smith’s analysis we have no idea about the income, working conditions, residential conditions or occupation of anyone. In fact the idea that their analysis says something about health inequality requires us to assume already that a longer life has something to do with income or other measures of socio-economic position.

Similar work was done in the 1980s using a Gini coefficient for age at death. This work, as far as I remember, did not exclude deaths in childhood. You can think of the Gini coefficient as a kind of variance around the mean age at death. In the 19th century and early 20th, many more infants and children died, giving a far wider range of ages at which lots of deaths occurred. So not surprisingly the variance in age at death fell sharply over the 20th century. The authors of these studies (I have forgotten who they were) argued (unlike Mayhew and Smith) that this was inconsistent with studies showing increasing health inequality. What makes the 2 types of study similar is that in both cases we actually have no idea about the socio-economic conditions of the people who were living longer or shorter lives. So in neither case is it possible to attribute any social cause to the demographic changes.
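For anyone curious, a minimal sketch of a Gini coefficient for age at death (a standard formula applied to made-up ages; nothing here is taken from the 1980s studies themselves):

```python
import numpy as np

def gini_age_at_death(ages):
    """Gini coefficient of ages at death: 0 if everyone dies at the
    same age, approaching 1 as deaths are spread very unequally."""
    a = np.sort(np.asarray(ages, dtype=float))
    n = len(a)
    ranks = np.arange(1, n + 1)
    # Standard closed form for sorted data
    return (2 * (ranks * a).sum()) / (n * a.sum()) - (n + 1) / n

# Many infant and child deaths widen the spread, so the Gini is higher:
print(gini_age_at_death([1, 2, 5, 60, 70, 75, 80]))     # ~0.44
print(gini_age_at_death([62, 68, 70, 75, 78, 82, 85]))  # ~0.06
```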

Wednesday, 30 March 2016

Why it is hard to tell if health inequality in UK is increasing or decreasing

One of the things I was most annoyed by while revising my book on health inequality was the discontinuity in some of the sources of data. Perhaps the most important of these is the interruption to the series on class differences in mortality that goes back to 1931 in England and Wales. I had thought it would be an easy matter. In the 1st edition of the book there is a table that shows how social class differences in mortality during working age steadily increased up to 1991. In 1931, mortality in working-age men in the most advantaged social class, made up of professionals and managers, was around 10% less than the average for all working-age men, and mortality in the least advantaged class, made up of unskilled manual jobs, was about 11% higher than average. By 1991, the equivalent figures were 34% lower for the most advantaged class and 89% (yes, that is not a typo) higher for the least advantaged. All of these figures are adjusted to take account of the fact that the different social classes may be made up of people of different ages. For example, men may be older by the time they get into management (although if this influenced the result it would in fact be the other way around, as older people have higher mortality).

So I thought, no problem, let's look up what happened to these figures in 2001 and maybe 2011 as well. However, there had been 2 large changes to the way the figures are calculated.

The first and least problematic of these is that the way in which it is decided which occupations go into which social classes has been clarified and put on a much more scientific basis. If you want to look more closely into this, the web site http://tinyurl.com/h3yxh34 might help. Although this "class schema", called the NS-SEC, is not the same as the "Registrar General's Social Classes" (RGSC) that were used between 1931 and 1991, we know from lots of work that different health measures do vary by NS-SEC in a very similar manner to RGSC. And because there is a clear logic to why occupations are put in the different SECs, it means that the study of health inequality using this measure should get a lot more scientific than it was before.

What is more infuriating is where the numbers needed for the calculation of class differences in mortality now come from.

Between 1931 and 1991, the numbers came from 2 sources. To get a rate of mortality you need a numerator (the number of deaths in a social class) and a denominator (the number of people in that social class). Up to 1991, the numerator came from death certificates, because a person's death certificate includes their occupation. And the denominator came from the Census, because at the Census you know how many people in each occupation there are in the country; add up the appropriate occupations into the social classes, and Bob's your uncle. The limitation here is that social class differences in mortality can only be calculated every 10 years, when there is a Census. This way of calculating health inequality is called the "unlinked method".

However, in the 1980s there was a bit of a panic about the way in which class differences in mortality had been rising so much. Some people guessed that the unlinked method might give a biased impression. What if people's relatives told the Registrar of Deaths a higher status job than the one they really had before they died? In fact, if this had been happening (think about it) it would have reduced health inequality, not increased it. But never mind, the outcome was the most wonderful data set. The ONS Longitudinal Study linked 1% of the population at each Census of England and Wales to future events, such as mortality. So instead of the numerator and denominator being taken from different data sources, we could now calculate class-specific rates of mortality from data on the same people: they gave their occupation to the Census and this could be linked to the information on their death certificate. But this actually led to some pretty big problems of its own.

I am just beginning to realize what a complicated topic this is! It stands as an example of the fact that "Big Data" may entail a lot more thought than some people seem to realize.

To cut a long story short, eventually the estimate of the size of class differences in mortality came to be taken from yet another set of numbers. This time the numerator was taken from death certificates again (with the occupation of the deceased person taken from the certificate) and the denominator was taken from something called the Labour Force Survey (LFS). There are some advantages to this. The LFS is done every year (unlike the Census). Although it does not count everyone in England and Wales, it is a large survey and the numbers can be multiplied ("grossed up") to give an idea of how many people are in each occupation in each year. BUT the LFS will have non-response. Unlike the Census, it is not obligatory for a person to take part in the LFS. So we have now moved from a denominator taken from an obligatory census of everyone in England and Wales, to one taken from an annual survey of (I think) around 40,000 people, some of whom may refuse to take part.

On the one hand, this should not necessarily lead to an under-estimate of social class differences in mortality, because non-responders to surveys tend to be people living in more adverse conditions. Since everyone gets a death certificate, more disadvantaged people are therefore more likely to have their death recorded than their occupation, which would tend to give higher death rates in more disadvantaged groups. In addition, the way the measure is calculated has changed. This is probably a sensible change.

However this may be, the national statistics office for England and Wales (Scotland and N Ireland have their own organisations) has given us as near as possible an estimation of what has happened to social class differences in mortality since 1991. Instead of the Standardised Mortality Ratio used between 1931 and 1991, which compares the mortality rate in each social class to a notional "average", we now have a measure that just gives the mortality rate per 100,000, adjusted to take account of age. ONS usefully goes back to 1971 and calculates the mortality rates in this way, then presents a ratio of the rates in the most versus the least advantaged social classes. Having done this, what we see is that in 1971 this ratio was 1.8 (working-age men in the least advantaged class were 80% more likely to die than those in the most advantaged), while in 2001 and 2010 the ratio was 2.8 (men in the least advantaged class were 180% more likely to die). Bear in mind that this is a comparison between the very best and the very worst employment conditions, not between the best and the average or the worst and the average. So it is bound to look rather a large difference. Which it is.
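For readers unfamiliar with these measures, here is a minimal sketch of the calculation behind an age-standardised rate ratio (the age bands, weights and counts are invented for illustration; ONS uses its own standard population and definitions):

```python
# Hypothetical deaths and person-years by age band for one social class
deaths = {"25-44": 120, "45-59": 400, "60-64": 380}
person_years = {"25-44": 200_000, "45-59": 150_000, "60-64": 50_000}

# A fixed standard population supplies the age weights
standard_pop = {"25-44": 500_000, "45-59": 350_000, "60-64": 150_000}

def age_standardised_rate(deaths, person_years, standard_pop):
    """Directly age-standardised mortality rate per 100,000: a weighted
    average of age-specific rates, so classes with different age
    structures can be compared fairly."""
    total_weight = sum(standard_pop.values())
    weighted = sum(
        (deaths[age] / person_years[age]) * standard_pop[age]
        for age in standard_pop
    )
    return weighted / total_weight * 100_000

print(round(age_standardised_rate(deaths, person_years, standard_pop)))

# The ONS-style comparison is then a simple ratio of two such rates:
# ratio = rate_least_advantaged / rate_most_advantaged
```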

What would the comparison look like if we could go back to 1931 or even 1961? There is just no way of knowing.

Monday, 28 December 2015

An example of qualitative work?

I promised to give an example of what I thought might be one approach to doing qualitative research on health inequality in an unfamiliar culture.
This came merely from a conversation I once had with someone from Hong Kong who insisted there was no such thing as social class there. Social class is a property of the industrial structure, and I don't know anything about the industrial structure in Hong Kong, so I could not argue with this. So if one wanted to do research on health inequality there, how to proceed?
I decided to try asking about social status. Using my understanding of caste systems (from undergraduate days) I zeroed in on some questions about commensality and intermarriage. I started with intermarriage and did not (as it turned out) have to go any further.
"Lets say you had a daughter and she told you she had fallen in love and wanted to marry", I proposed to my friend. "What sort of guy would you be hoping she would want to marry?"
"Well", he replied, "of course he would need to be Chinese" (my friend is Chinese).
I said "OK, but would there be anything else about the potential husband that you and your wife would really prefer?"
My friend thought for a few minutes. At last he said
"Oh! You mean colour! Yes, of course, I understand".
So here we have an example of a discovery about a social status hierarchy. I had previously had no idea whatever that, within the Chinese community in Hong Kong (which I had realised was of high status and pretty endogamous), there were additional gradations of colour.
But if I had not already had a lot of ideas at the back of my mind, ideas that arise from reading theories about social inequality of different types, I would not have (1) been aware of the difference between class and status (2) had any idea of how to ask questions that might elicit relevant responses for the discovery of status systems.
I later discovered, partly by talking to experts and partly by reading additional literature, that within the African-American population there is well known to be a so called "pigmentocracy" in which those of darker skin colour have a lower status. This can be shown to even have associations with health. That indicates to me that the colour hierarchy is a very profound and stable phenomenon that "gets under the skin".
Since becoming aware of this phenomenon I have learned about its effects on the lives of several friends. This is anecdotal stuff of course. But if I had not been taught about the criteria for caste membership, I would never have even begun to realise that "pigmentocracy" existed at all, certainly not in Hong Kong.
The point being that without asking questions informed by social theory, a basic feature of social inequality might have been totally missed by qualitative work.

Tuesday, 15 December 2015

Are health inequalities less marked during "youth"?

Someone tweeted me a question about the famous (some years ago) "Health Equalization in Youth" hypothesis and I said it would take more than 140 characters to reply. So here it is in a few more characters.

For a start, it is called "Health Equalization" because health inequalities are very marked in the perinatal (not neonatal) period and in infancy. But between around 14 and 18 years of age (depending on what paper you are reading; sometimes it goes higher) it seems that health inequality is indeed lower than after around age 30. It does depend on how you measure health, however. Social class differences in average height, for example, are just as great in "youth" as earlier or later in the life course (or they were last time I looked). Class differences in, for example, louse infestation are also large. It is in mortality that we really see the smaller health gap.

Well, not many people, thank goodness, die during adolescence (this is changing now, but it was true in the 1980s and 1990s). At this time, cancers were one of the more common causes of death in adolescents (not common really, just more common than heart disease, which was the biggest killer in the adult population), and cancers do not generally have a big social gradient. Apart from lung cancer, which is virtually unknown in young people. Leukaemia, for example, did not at this time have much of one as far as I remember.

The importance of the idea, however, was in the light some authors thought it shone on how health inequality in adulthood emerges. If, they argued, inequality in health according to the social class of one's parents was low in adolescence, but inequality according to one's own occupational class then emerged strongly by age 35 or so, maybe this tells us something. Perhaps there are unhealthy adolescents scattered through the population at random, regardless of their parents' social class. During youth, the deaths of these unhealthy adolescents will not, therefore, show a health gradient. What causes inequality in mortality later on in the life course, on this argument, is that the unhealthy ones (who were going to die anyway) could only get the lousiest, worst paid and most hazardous jobs. So although it looks like low pay and job hazards cause early mortality, actually it is poor health which causes both early mortality and having a lousy job. I know this sounds ridiculous, but that is what the argument was and why it got so much air-time.

I remember talking at a meeting to some people from a support group for people with chronic kidney disease. One of the conference speakers had put this idea forward. The people with kidney disease were very amused. They asked if anyone had the slightest idea what the illness was like, and how silly the idea was that people like them would be selected into things like mining, building work and ship building. They were looking forward to telling all their friends about the ludicrous academics who thought building workers have high mortality because they already suffered as children from diseases like kidney failure.

One thing you have to remember, especially in today's economic environment, is that the statistics on mortality that were being used did not include people with permanent disability who had no stable occupations at all. You could only be included in the statistics on health inequality if you had an occupation which defined your social class. So people who had chronic illness from childhood that prevented them from working were excluded altogether. And this was also in the days before people with chronic illnesses were being forced into various types of low paid work. So in fact, people in hard jobs were selected at the beginning of their working lives for good health. This has been called the "healthy worker effect". But that is another story.

As it was, at the time, there were quite a few people, including policy-makers, who for quite some time believed that health inequality was due to this process whereby sick people were recruited into tough jobs. It was called "direct selection".

The most important papers were written by Patrick West & colleagues:

West, P. Health inequalities in the early years: is there equalisation in youth? Soc Sci Med 1997; 44(6): 833-858.

West, P., Macintyre, S., Annandale, E. and Hunt, K. Social class and health in youth: findings from the West of Scotland Twenty-07 Study. Soc Sci Med 1990; 30(6): 665-673.

and David Blane & colleagues:

Blane, D., Bartley, M., Smith, G.D., Filakti, H., Bethune, A. and Harding, S. Social patterning of medical mortality in youth and early adulthood. Soc Sci Med 1994; 39(3): 361-366.

Monday, 5 October 2015

Beware of "Resilience"

Why has resilience become such a big buzzword in social science and even in social epidemiology? In 2003 or thereabouts a group of us formed a research network on "Resilience and Capability" funded by the ESRC. Even at that early point there was quite a story attached. It took them ages to decide whether to fund us or not, and this turned out to be because, naively, we had not in our wildest imaginings realised what the topic was supposed to cover. The "call" had come out in 2002, just after 9/11, and was meant to attract proposals dealing with civil defence. The kind of resilience the funders were interested in was how cities, for example, might be prepared to resist the effects of terrorism. Our group's theory expert was Ingrid Schoon and our methods expert Amanda Sacker. They had been developing a model of "Risk and Resilience over the Life Course" in a previous project. Ingrid wrote a book, Risk and Resilience, which is highly recommended. We did have 2 project leaders, Rich Mitchell and Sarah Curtis, who, as geographers, knew about urban processes. We were accused by the ESRC of not being sufficiently interested in work and employment, despite the group including Stephen Stansfeld, one of the world leaders in the social psychology of work. Our expert on later life was David Blane, who, like Stephen, has a medical training and is interested in biological processes of resilience and vulnerability. And our public health leader was Margaret Whitehead. I told someone the other day that I felt like an impresario in Hollywood putting together an irresistible cast for a movie. And it was indeed a fantastic group, supported also by people like Dick Wiggins and Danny Dorling. As we later found out, the ESRC, despite their original plan, just could not resist it.

Anyway, the point of this is that in 2002-3 resilience as a human characteristic was not on anyone's radar, or at least not on the radar of powerful people with money. Sometime shortly after the end of the research network, Michael Marmot pointed out to me that "everyone was talking about Resilience". And at around the same time, Capability started to be a big word in social epidemiology as well. In the Network, we all read Amartya Sen's work on it and used his ideas to guide that bit of the work. My own rather basic idea was that over the life course people who passed through "resilience-promoting environments" in their family, neighbourhood, school and work had strengthened capabilities to "live a life they have cause to value", in Sen's words. For example, one way we tested this was to see whether people who had experienced warmer relationships with their parents (fathers were just as important as mothers) had a more "securely attached" way of looking at the world. They did, and this also seemed to help the securely attached people to reach the higher levels of the Civil Service in their careers. But secure attachment was not important for everyone, and this is a vital point. It only made a difference to people who had not had an elite type of education.

This result highlighted a couple of important issues. (I am using it here because I worked on this paper myself; for the vast majority of papers from the Network I was really the kind of administrator who helped to make sure the bills got paid and everyone had enough sandwiches at the Network meetings.) The first was that resilience is only important in the presence of adversity. We had quite a lot of discussion about this, and Ingrid distinguished several different definitions of resilience, of which this was the one we adopted. I thought it was important to distinguish it from just any kind of beneficial experience over the life course. I was worried: how could we distinguish between a resilience factor and the simple fact that people who experience only two adversities will do better than those who experience three? This is the well-known "accumulation" model in life course research. We didn't want to do just more of that (although it was not quite such a cliché in 2003).

So if someone has experienced poverty in childhood and a stressful job, they will tend to have worse mental and physical health than someone who only had poverty in childhood. But does this make a financially secure childhood a resilience factor? No, it is only a resilience factor if it is more important to people who have stressful jobs than it is to those with better jobs. This is what is called an interactive rather than an additive relationship. Let's say childhood poverty and job stress each take a score of 1. In an accumulative relationship between the two factors, the risk of mental ill-health (let's say) would be 0% in someone with a secure childhood and a nice job, 50% in someone with either a secure childhood or a nice job (but not both), and 100% in someone with a poor childhood and a stressful job. If childhood financial security is a resilience factor, someone with a stressful job but a secure childhood would have a zero risk of mental ill-health. The two tables below set this out, and there is a small code sketch after them.

Accumulative relationship

Childhood      Stressful job    Not stressful job
Poor           100%             50%
Secure         50%              0%


Childhood financial security is a resilience factor

Childhood      Stressful job    Not stressful job
Poor           100%             50%
Secure         0%               0%

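For anyone who prefers to see the two models as code, here is a minimal sketch in Python. The function names and the percentages are just the illustrative teaching numbers from the tables above, not estimates from any of the Network's papers.

    # Illustrative risks of mental ill-health under the two models in the
    # tables above. The percentages are made-up teaching numbers, not data.

    def accumulative_risk(poor_childhood: bool, stressful_job: bool) -> int:
        """Additive model: each adversity contributes 50 points, independently."""
        return 50 * poor_childhood + 50 * stressful_job

    def resilience_risk(poor_childhood: bool, stressful_job: bool) -> int:
        """Interactive model: a financially secure childhood cancels the
        effect of job stress entirely, so risk depends on the combination."""
        if not poor_childhood:
            return 0                      # the resilience factor at work
        return 50 + 50 * stressful_job

    for poor in (True, False):
        for stress in (True, False):
            print(f"poor childhood={poor!s:<5} stressful job={stress!s:<5} "
                  f"accumulative={accumulative_risk(poor, stress):3d}% "
                  f"resilience={resilience_risk(poor, stress):3d}%")

The difference shows up in one cell only, secure childhood plus stressful job: 50% under accumulation, but 0% if childhood financial security is a genuine resilience factor. That is what an interaction means.
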

The second point of debate within the Network was the role of childhood experiences and parenting. The paper I worked on just happened to include data on childhood experiences. But I confess that I also have a bit of a bee in my bonnet about this kind of thing. Ingrid was not having any of that! I remember her phoning me on my mobile in the middle of Marylebone High Street to straighten out some of my ideas. Similarly, Margaret was very concerned that resilience was being used as a sop or a fig leaf, as some of the WHO people she was working with had feared. Ingrid and our colleague Jenny Head were concerned about "parent-blaming". They were quite clear that resilience could be fostered at any time in the life course by the right kind of environment: school, workplace or neighbourhood, for example.

One implication of this kind of thinking for policy is that a resilience-promoting environment is in fact most important for people facing the most adversities in the rest of their lives. This is the total opposite of what tends to happen in real life. A good school is most important for children whose families are suffering the effects of low income, parental ill health or substandard housing. Facilities for maintaining a social support network, such as free, efficient public transport (we used the example of the Croydon Tram), are most important for people with mobility problems of whatever kind. The accessible low-loading feature of the tram wouldn't make much difference to people who had cars, or who could hop on and off classic buses, but it makes a huge difference to those with prams, in wheelchairs and so on.

What turned out, against our expectations, not to increase resilience in the face of adversities such as poverty and the onset of chronic disease was the short-term provision of services. David Blane, Gopal Netuveli and Zoe Hildon found that older people whose health began to be affected tended to maintain good mental health if they had strong, long-standing social networks. The key to this seems to have been that they were enabled to maintain a firm identity that did not revolve around the illness or disability. For example, someone who developed arthritis would not become "the arthritic lady" but "our community festival organizer Barbara, who has developed arthritis". Joining a patient-support type of group did not have the same effect. Margaret Whitehead and Krysia Canvin found, alarmingly, that even Sure Start Centres were sometimes shunned by the poorest mothers for fear that social services would remove their children.

How common was resilience in our studies? Using several different sources of data, we concluded that in the face of a major adverse event such as the loss of a loved one, unemployment or the onset of a serious illness, around 20% of people could "bounce back" fairly quickly. It is much harder to talk in this kind of way about resilience in the face of lifelong adversities, because, as already pointed out, once an individual gets started on an adverse life course, the way social institutions work is not to try and provide "springboards" to remedy the problem, but to pile one adversity on top of another.