Friday 9 December 2016

Organising academic work

Recent exchanges about research careers have motivated me to update the blog post I rather lightheartedly wrote some time ago about organising academic work.

I have long wondered why academic employment should not just obey the same regulations as any other kind of job. There is a downside to this. Academic tenure was designed to prevent authorities from sacking academics for having unpopular opinions. Unpopularity could arise from political issues, of course. But other matters are at least as important. Freedom to criticize medical and drug research, for example, is vital for the safety of patients. But how genuine is job security for "tenured" academics really these days? I really don't know (I never had tenure) but reactions would be interesting.

Be that as it may, if academic departments started to run like commercial companies, you would have a situation where everyone's job depended on the firm getting enough business to pay the wage bill. Let's say it costs £150k per year to employ a lecturer, adding together salary, NI, pension and "overheads". Anyone who has had to cost a research proposal can do this calculation. If we take student fees to be £9k per year, then 20 undergraduates per lecturer is a generous allowance.
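Just to make the arithmetic concrete, here is a back-of-envelope sketch using the round figures above (illustrative numbers only, not a real costing):

```python
# Back-of-envelope teaching arithmetic, using the round figures in the text.
# Illustrative numbers only, not a real costing.

FULL_COST_PER_LECTURER = 150_000   # salary + NI + pension + "overheads", £ per year
FEE_PER_STUDENT = 9_000            # undergraduate fee, £ per year

# Students needed just to cover one lecturer's full cost
break_even = FULL_COST_PER_LECTURER / FEE_PER_STUDENT
print(f"break-even: {break_even:.1f} students per lecturer")   # about 16.7

# A group of 10 lecturers taught by 200 students (20 each)
lecturers, students = 10, 200
surplus = students * FEE_PER_STUDENT - lecturers * FULL_COST_PER_LECTURER
print(f"left over for admin etc.: £{surplus:,}")               # £300,000
```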

But what about research? Let's go back a bit to where it started to be essential for departments to get research money. I don't remember the dates at all well; it was a Thatcher plan. But first there was an attempt to divide universities into "R" and "T" (no prizes). This did not work. The next step depended on the fact that in pre-Thatcher days money went to universities from the government in the form of the "science vote". This money was notionally divided into thirds: 1/3 for teaching, 1/3 for admin and 1/3 for research. Lecturers notionally spent their time in that way. The government took away the research third sometime in the early 1990s (I think) and gave it to the Research Councils. So university departments had to win back 1/3 of their funding by applying for grants. I always thought the idea was that this would de facto create the R and T divide, with some universities cleaning up all the research money and thus having to do a lot less teaching (see my previous blog on this). These would then become the elite universities that, of course, the children of the rich would attend. Lecturers would not be dealing with hundreds of pile-'em-high students and would be at the cutting edge of their disciplines (they thought).

The form taken by the missing 1/3 was the "overheads" paid on the wages of research assistants. Four or five years ago these were changed from 40% of salary costs (i.e. a percentage of salary) to fixed amounts set by the universities. So today, whether you are paid £25k or £60k, your "overhead" at UCL, Essex or Imperial will be around £75k. It is a horrible thing to say, but I have always feared this means that research assistants are basically "overhead fodder". The colleges get the same overheads regardless of how much they pay you, so it is in their interests to replace you with cheaper meat.
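To see why the flat overhead creates the "overhead fodder" incentive, here is a minimal sketch using the figures quoted above (the 40% rate and the £75k flat amount come from the text; real institutional rules will vary):

```python
# How the flat overhead changes the incentive, using the figures quoted above.

FLAT_OVERHEAD = 75_000    # roughly the per-post figure quoted for UCL, Essex or Imperial
OLD_RATE = 0.40           # old regime: overhead charged as a percentage of salary

for salary in (25_000, 60_000):
    old_overhead = OLD_RATE * salary   # scaled with what the research assistant was paid
    print(f"salary £{salary:,}: old overhead £{old_overhead:,.0f}, "
          f"new overhead £{FLAT_OVERHEAD:,}")

# Under the flat regime the overhead income is identical for a £25k and a £60k
# research assistant, so the cheaper appointment is the better deal for the college.
```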

Back to organising research today. My vague notion has been that you need to calculate how much "business" you need to support your staff. This is what happens in many industries. It does mean that when business dries up a redundancy situation arises, as it did a few years ago at the National Centre for Social Research. But my idea was that this would be the same for everyone, with no special protection for certain jobs. The flip side would be that everyone joining a department would have to start developing their own portfolio of 'business'. They would be mentored and supported to do so. This is how the Institute for Fiscal Studies operates.

It is quite sobering to consider that it would take 200 students to support a group of 10 lecturers (this would leave some money to employ an admin expert). But this calculation becomes more important now that we are being told by senior university managers that in fact "research does not pay", apparently even research that pays the full "100% overheads". At least that is what I have been told.

The problem is, if we are all to be beavering away maximising the student experience (not an unreasonable thing to do), where is new knowledge to come from?

Thursday 16 June 2016

A sociological approach to the relationship between research and policy.

There is so much about "impact" that takes what I think is a naive approach. Here is a rather old attempt I once made to apply a more sociological approach to the relationship between research and policy. Here it was applied to "Evidence Based Medicine" (EBM).

RELATIONSHIP BETWEEN RESEARCH AND POLICY: AN APPROACH FROM SCIENCE AND TECHNOLOGY STUDIES


Evidence and evaluation are taking a higher priority in the delivery of health services, and yet it seems to have been surprisingly difficult to establish which are the most effective treatments and to ensure that these are used. A whole industry now exists around “translational medicine”, which attempts to align research and clinical practice more closely. In such a context, it becomes increasingly problematic to operate with over-simplified notions about the relationship between research and policy. The idea that research takes place in an ivory tower motivated purely by intellectual curiosity, and is then applied to obvious technological, medical or social problems is widely held. This may indeed happen on some occasions, but in practice the great majority of changes in the designs of clinical procedures, artifacts or policies need to be understood as part of a more complex process. Problems in deciding what is the best practice and making sure this is carried out are to be expected rather than wondered at. This more complex view of the relationship between science and practice is reflected, for example, in the call by the US Supreme Court for judges to act as arbiters of the validity of the scientific (including medical) knowledge claimed by ‘expert witnesses’ (Solomon and Hackett 1996). Rather than the scientist being seen as the final arbiter of what is ‘fact’, the claim to truth is seen as an adversarial one.

The late 1970s and 1980s saw the flowering of a number of related fields of inquiry which made it possible to gain a more detailed understanding of the ways in which research programmes are situated within policy-making contexts: in particular the sociology of social problems (e.g. Spector and Kitsuse 1979, Gusfield 1981, Aronson 1982, Pfohl 1985), the social histories of science (e.g. Shapin and Schaffer 1985) and medicine (e.g. Treacher and Wright 1982),  social policy (Lindblom and Cohen 1979, Prince 1983) as well as in the sociology of science and technology (Barnes and Shapin 1979,  Latour and Woolgar 1979/1986, Mackenzie 1981, Gilbert and Mulkay 1984, Collins 1985, to name but a very few and for an overview of debates see Pickering 1992). The term ‘socio-technical system’ is sometimes used to refer to the networks of human, biological and physical actors involved (Bijker et al. 1987). This paper suggests that an approach based on a combination of these ideas with historical work (Macleod 1967, 1974, 1988, 1993; Macdonagh 1958, 1965) may prove helpful in understanding the changing relationship between the production and utilisation of scientific knowledge.


Policy and research: a conventional view

A clear and succinct version of the conventional ‘empiricist’ model of the relationship between research and policy is given by Martin Bulmer in his Uses of Social Research:

governments, firms and other organisations need adequate information on which to base policies which they ...implement. The task of ... research is to provide as precise, reliable and generalisable factual information as possible ... This information, when fed into the policy-making process, will enable policy-makers ... to reach the best decision on the basis of the information available (Bulmer 1982: p. 31).

In the health field, a similarly clear summary of the nature of the relationship is given by Elwood and Gallagher (1984):
the role of social policy ... is to effect change in the search for improvement, and the role of science ... is to provide the firmest possible empirical grounds for change ... (317). It is the role of policy, in their view, to determine which questions are important enough to justify devoting scientific resources to them. Danger arises, however, if policy actors meddle in the interpretation of data, or if scientists make statements which go beyond the limitations of the data.

The paradigm for the use of research in such a problem solving process in the UK is the so-called ‘Rothschild’ model, in which government or  some other client sets the problem and the researcher produces the answer as it were ‘to order’. This paradigm finds considerable favour in policy making circles, and was imposed during the 1980s on government funded research groups as a way to increase the cost effectiveness of money spent on science. However, this model is not found in practice to give a very accurate picture of how policy makers and researchers interact (Feldman and March 1981; Rein 1983; Bulmer 1982: 48).

Models of the research-policy relationship
There are a number of alternative approaches to understanding the research-policy relationship (both descriptive and prescriptive), for example:

Linear model
In this model, little attention is paid to the ways in which, or the reasons why, new knowledge arises. Once in existence, knowledge is regarded as leading to useful policy applications in similarly unexplicated ways. This picture fits the examples of the contribution of physics to the development of micro-processor technology, or of molecular biology to genetic engineering. Implicitly, it assumes that science follows its own agenda but more or less automatically results in technological advance and policy change.

Enlightenment model
In this paradigm, new knowledge is seen to enter the public domain via a more indirect process involving “quality” media, opinion leaders, pressure groups etc. rather than direct contact between scientists and practitioners. It has been described as the provision by scientific information of an ‘intellectual setting of concepts, propositions, orientations and empirical generalisations that will expand the policy makers frame of reference’ (Davidson and White 1988: 2, Gandy et al. 1983). It was intended perhaps more as a model of the relationship between social research and social policy as opposed to science and technology (Janowitz 1972, Weiss 1977, 1979, Wagenaar et al. 1982). However, it is relevant in the case of medicine and health care, where the influence of pressure groups and the media is known to be considerable (Elwood and Gallacher 1984, Gabe et al. 1988, 1991).

Political model
This might be regarded as a version of the Rothschild model discussed above which is more descriptive than prescriptive. It holds that in fact, customers for research are not motivated by objective evaluation of problems. Rather, the policy maker or practitioner, acting as the customer, commissions, not just any researcher, or the most skilled, but one whom they know in advance will produce a certain answer (Caplan 1976, Tizard 1990).  The outcomes of political uses of research will not, of course, appear to be so. However, it will often be found that ‘even where debate includes specific reference to research findings, the rhetoric of empiricism may merely constitute a cosmetic 'rationality'’ (Davidson and White 1988: 6). This approach has some advantages over the first two. The ‘origin of the problem’ is made more explicit, as is the articulation between researcher and practitioner or policy maker. Customers are regarded as actors with interests, and in order to advance these they need to be able to appeal to the right kind of facts. The researcher is given the job of producing these. It is an awareness of this process that has led to the judicial treatment of expert witness testimony in the USA. Experts are no longer to be treated as authorities, but as partisans.

Partisan mutual adjustment model
This model acknowledges that scientists as well as policy makers live in a real world where careers and ambitions play a role in the negotiation of both technical and political decisions (Lindblom 1965). Actors on all sides are ‘partisans’ for their own interests, but will adjust their claims in the light of the balance of forces and the desirability of alliances with other groups. It is not only the customer ordering the ‘Black Magic rather than the Dairy Box’ (Bartley 1992), but the scientist/contractor who can see on which side her bread is buttered, and deliver accordingly (Hanmer and Leonard 1984). It must be stressed that those who advance variants of this model of the research-policy relationship are not claiming that researchers cold-bloodedly fiddle results to please their patrons. What we learn from the sociology of science (Hacking 1992) is that research itself is such a heterogeneous and indeterminate process that such demands are absorbed within its normal practice.


An alternative approach: policy issues as social problems
This paper will argue that it is not the mere existence of research findings, or even the opinion of the academic community as to their quality, which ensures the application of new knowledge in practice and policy. It will argue for the relevance of advances in the sociology of science and technology in France and Britain which have emphasised the socially located nature of science. These studies have shown that research takes place within a context of political and pressure group negotiation, in which scientists enter the policy arena as part of a form of entrepreneurialism in order to gain support for their work. They show how central this entrepreneurialism is to the doing of successful science, and describe the skilful work of a social nature in which scientists must engage even before the work of fact-creation can begin.

Pressure groups and scarce resources
In this model, the relationship between research and policy may be regarded as taking place within ongoing ‘social problem processes’ -- claims and counter claims on the resources of governments and other major resource holders (Schattschneider 1960; Ryan 1978; Richardson and Jordan 1979). These processes take place within ‘issue communities’ comprising members of relevant pressure groups and professions and journalists,  as well as  ‘political administrators’ (as Heclo and Wildavsky [1974] term both politicians and civil servants) and the scientists themselves whom we might regard as the principal actors.

The notion of the social problem process is derived from the sociology of deviance. It refers to the process through which one or more pressure groups set out ‘moral claims’ to the effect that a certain phenomenon or condition is objectionable, ‘a problem’ and ‘something must be done’. This is regarded as a claim on scarce resources (of businesses, companies, or governments), and therefore likely to be opposed (Manning 1985). Claims proceed through a set of stages in which firms or governments may respond by denying the problem, or by instituting research to show that it does not exist or is minor (Downs 1973, Spector and Kitsuse 1979, Manning 1985).

‘Factual claims’ are also made as part of the social problem process. These may include debates over the size, strength and reality of the problem, as well as the effectiveness of proposed solutions. It is into this fact-claiming process that research comes into play most obviously: for example the debates on leukaemia clusters, the effects of passive smoking or BSE. Social problems can get lost in ‘loops’ where debate goes round in inconclusive circles, either in the form of irresoluble disputes between scientists, or in more institutionalised forms such as Royal Commissions.

The other groups involved in social-problem claims and counter claims are those Heclo and Wildavsky call ‘political administrators’: politicians and civil servants. Put very crudely and briefly, both politicians and officials in ‘spending’ departments of state (i.e. those other than the Treasuries) have an interest in enlarging their spheres of action. It is a benign contradiction within modern government that while some careers and political reputations may be made by restricting public spending, more are made by enlarging it: ‘policy-makers, in office for a relatively short span of time, are anxious to 'make a mark' on policy and hence on society; ...’ (Davidson and White 1988: 8; Walker 1987). For the politicians, this is what gets them noticed as significant reformers; for the officials, this is quite simply what keeps them in work (Prince 1983). As Hall et al. put it: ‘The same hierarchy that provides the basis for regulating issues and initiating policies is also a career structure. It represents rewards, satisfaction, opportunity, disappointment ... for ...individuals’ (Hall et al. 1978: 66).

Role of the media
From sociological studies of the media comes the idea that groups involved in social problem processes make strategic use of journalists. In one view, this takes the form of ‘information subsidy’ -- the journalist receives a ‘pre-digested’ version of a complex story (Gandy 1982). The journalist is thereby ‘subsidized’ in terms of the amount of effort involved in getting the story, and in return, the story is more likely to ‘make the papers’, and gain access to the agenda for public debate. Information subsidy may come from any of the parties in the issue community. Very commonly it comes from scientists or professionals who are also participating in pressure group activity.

All participants in a social-problem process are engaged in one or other form of struggle for career progression. The individual must ‘have a bright idea and fight for it’ (Bartley 1992), and then invest the resulting career capital in a skilful manner. Pressure groups are often a good source of bright ideas, as they are of new stories to fill newspapers. This leads in many cases to a symbiosis between what might appear to be conflicting interests. In order to gain the upper hand in a scientific competition, scientists may risk being ‘misrepresented’ or ‘sensationalised’ by seeking out the attention of journalists. This often takes place at strategic moments, such as when decisions on funding are being taken, or when there is a threat to close down a research facility (Silverstein 1981). Similarly, in order to advance her career, an ambitious official needs to be one up on her peers in the bureaucracy. One way to do this is to have good contacts in pressure groups and present chosen ideas to politicians as potential reforms.  Covert co-operation may take place in various ways between journalists and officials on the one hand and pressure groups or ‘deviant’ scientists on the other.

Marketing expertise
Professional groups are an essential part of the social problem process. In many cases, a whole process is driven by one or more ‘orphan professions’, who seek to have some condition/s defined as a social problem in order to offer themselves as (at least part of) the answer. I have called this ‘sub-professional entrepreneurialism’ (Bartley 1992). This may happen after the end of previous social-problem processes which leave expert groups high and dry. For example, several notable reforms in the area of public health took place after the end of the Napoleonic Wars, a time when overcrowding in the professions was combined with de-mobilisation of military officers. A whole social layer was thereby available for recruitment into the growing inspectorates produced by a series of reforms such as the Public Health and Passenger Acts (Lewis 1965; Reader 1966; Macdonagh 1958, 1961, Novak 1972, Johnson 1972). This is another way of looking at what American sociologists of deviance have termed ‘moral entrepreneurialism’, except that it makes more explicit the fact that what is being marketed is the professional group itself.

Networks of knowledge
Scientists also engage in similar activity, which I will call ‘sub-disciplinary entrepreneurialism’. Whereas the sociology of deviance has paid relatively little attention to the activities of scientists within social problem processes, their activities are extensively described and analysed in the sociology of scientific knowledge (SSK) and science and technology studies (STS). STS forms something of a radical departure in the understanding of science. One would not normally imagine that the ethos of science is compatible with entrepreneurial activity. And this was indeed the view taken in earlier sociological approaches such as that of Merton. For Merton, ‘good science’ needed no explanation other than rationality and the true state of nature in the external world. Social or political factors, in his view, only operated to produce conditions under which irrationality prevailed, and the progress of science was blocked (Merton 1973).

Later studies, however, indicated that successful science is not necessarily a less ‘social’ phenomenon than unsuccessful science. STS aims for an understanding of the entrepreneurial skills of successful scientists -- those whose factual claims are accepted as ‘knowledge’. Successful knowledge production depends, according to this perspective, on their ability to engage in what Callon and Latour have called ‘translation’ (Callon 1981, 1986; Callon and Law 1982; Latour 1987). Scientists must undertake extensive network building (the approach of these French scholars is sometimes referred to as ‘actor-network theory’), for example convincing funders that it is ‘in their interest’ to support the work which the scientists wish to do, convincing journal editors to publish their papers, convincing other scientists to accept their results. This work extends far beyond the laboratory or research unit itself. As Latour puts it

If you get inside a laboratory, you see no public relations, no politics, no ethical problems, no class struggle, no lawyers; you see science isolated from society. But this isolation exists only insofar as other scientists are constantly busy recruiting investors, interesting and convincing people. (1987: p. 156)

These activities may include enrolling a relevant sub-profession with a parallel interest in claiming that a serious problem will be solved or avoided, for example, by an alliance with a pressure group, as is found in discoveries of disease clusters or controversial new treatments for diseases suffered by articulate communities (Nicolson and McLaughlin 1988; see also Callon et al 1986).

The activities which take place within actor-networks affect the products of scientific and technological activity at the most basic level (Mackenzie 1981), although this process is never described in accounts of scientific work. According to Latour, one of the most important activities in science is the artful elimination of all traces of the social or the political from the final reports of findings. Like all other groups involved in social-problem processes, scientists are using them to further their claim on resources. In Pickering's words ‘doing science is real work and ... real work requires resources for its accomplishment’ (Pickering 1992:2). Resources are claimed, in this case, by offering accounts of events which are  ‘objective’, and neutral, that is, unaffected by social or political interests:

We need a court of appeal which is maintained outside of political debate, in order to be able to put an end to political dissent. Scientists and politicians agree on this necessity. That is why, despite being shown by every scientific controversy that scientific proof is negotiated, we cling to the opposite idea for all our lives are worth (Latour 1981).

Hence, to go back to the example which opened this paper, the sense of shock we feel at finding that a US court has demanded that scientific evidence in trials no longer be treated as non-adversarial. Science's claim to provide authoritative rather than partisan accounts is seen to crumble.


Fact creation in the social-problem process
To recapitulate the argument of this paper: in the social problem process, typically an entrepreneurial scientist or professional forms an alliance with a pressure group organised around a relevant issue. What makes it relevant is that the issue could shape up to be the sort of ‘problem’ to which the entrepreneurs might offer their skills as ‘the solution’. They put the issue on the public agenda by giving subsidized information to the media. Government then produces an ‘official response’ which in turn may well create business for another group of professionals and/or scientists.

Out of this process emerge a series of factual claims, emanating from both sides. Some of these claims may attain the status of accepted fact. But this will only happen if there is a position which is agreeable to the majority of participants in the issue community. Participants will agree to a factual claim if it suits their interests (they will then be ‘enrolled’ in Latour's terms). The more widespread and diverse the issue community (which may perhaps be regarded as another way to conceptualise an actor network, except it gives more primacy to actors from the policy making sphere) the harder the fact will be. A greater amount of work of enrolment and ‘interessement’ will have had to be done, and a statement which is found useful by a wide variety of highly diverse groups is going to find more defenders than one which is more local and specific.  This is the key to the actor-network theorists’ paradoxical claim that hard facts are not less social but more so (Latour 1987).

Once we start to look at the research-policy relationship as merely a facet of social negotiation over markets for expertise and, ultimately, the allocation of scarce resources, it is far easier to understand why clear instances of research influencing practice or policy are so hard to come by. For one thing, as indicated in several of the models described above, it is questionable whether the direction of influence is very often from research to policy at all. Several commentators see it as quite the opposite. The actor-network approach takes a slightly more complex view, that facts themselves emerge from the accomplishment of an alliance of groups both inside and outside the scientific community:

technical objects must be seen as a result of the shaping of many associated and heterogenous elements. They will be as durable as these associations, neither more nor less. Therefore we cannot describe technical objects without describing the actor-worlds that shape them, in all their diversity and scope (Callon et al. 1986).

Social problem processes may end in a wide variety of different ways. Perhaps the most unusual of these is that one side changes its view ‘in the light of evidence’, leading to visible change with an identifiable research input. Key groups may lose interest and move on to more promising terrain; some sort of institutionalised conflict may result in the setting up of new bodies (for example, in the UK the ‘Committees on Medical Aspects’ of various environmental factors such as food policy (COMA) and radiation in the environment (COMARE); in the USA the National Institute for Occupational Safety and Health (NIOSH) and the Food and Drug Administration (FDA) are examples).

Practical implications for health services research
This view of science (both pure and applied) as taking place within social-problem processes may have implications for present concerns relating to the dissemination of innovations and evidence based medicine. Perhaps it is most consistent to regard EBM itself as part of a social problem process, one which combines concerns with clinical efficiency, the cost of modern medicine and wider economic management.  In this process, claims and counter claims are made as to the necessity for health service rationing. Knowledge claims relating to the effectiveness of various treatments therefore take on a higher profile than would previously have been the case. This is agreeable to certain subprofessional and subdisciplinary groups, who can see their relative importance within the ‘medical-political complex’ growing at the expense of previously more favoured groups (primarily the clinical specialists). For example, major funding was made available to a centre for the analysis and dissemination of information on the effectiveness of different procedures in pregnancy and labour, and later for all drugs and procedures (the ‘Cochrane Centre’) and for an NHS ‘Centre for Reviews and Dissemination’ which gathers and evaluates studies on a wide range of clinical and non-clinical issues and disseminates its findings on the ‘best practice’.

The technical capacity to estimate the effectiveness of treatments has long been available, and advocated by epidemiologists and health service researchers. But it took an organizational upheaval in the UK National Health Service to bring about a situation where such claims were given any credence. Crudely, reforms of the UK health service have given far more power to health service managers, whose task is to see that treatment objectives are fulfilled within the limits of their budgets. Within this organisational structure, epidemiologists' and medical statisticians' knowledge claims are valued for their usefulness when set against the claims of clinicians and the demands of individual patients. These more objective estimates of effectiveness enter into managers' implementation and justification of rationing, which ensures a continuing market for the expertise of those who provide them.

Of course, the picture is nothing like as simple as this in practice. An STS perspective would see it as problematic to approach evidence based medicine as a series of attempts to ‘disseminate the facts about most effective practice’ beyond their original discoverers. Latour has criticised a simple ‘diffusionist model’ of the translation of knowledge into practice. Diffusionists ‘simply add passive social groups to the picture’ who may ‘resist’ the innovation. We should realise, he insists, that ‘the obedient behaviour of people is what turns the claims into facts...’. In the STS model of the science-policy relationship, facts themselves emerge from knowledge claims when these claims are adopted by wider networks of people with greater diversity of interests and concerns. The successful fact builder is one who makes the most skilful efforts to discover and enrol the interests of new groups.

We may also have to acknowledge that there is no single moment when a fact is accepted and becomes part of an unchanging knowledge base. The very nature of the ‘fact’ may undergo change when new groups become involved in the process. Hence the chronic indefiniteness of much health advice (and other forms of medical knowledge), which gives rise to reportedly widespread public cynicism and the failure of health education campaigns. The complex and diverse combinations of interests and concerns within the issue communities in which research on many health issues takes place mean that it is extremely difficult to stabilize a wide enough network of people who accept any given claim.

Perhaps such complexity is endemic to a situation in which research and professional activity are so closely related, and done by the same people, as is the case in medicine. In such a case, professional interests intervene between factual claims and ‘end users’ in a different manner than would be found in the case of, say, physics and electrical engineering. In law, a ‘pure’ profession not hybridized with ‘science’, all claims are regarded as essentially adversarial. Medicine is in the difficult position of combining professional with scientific modes of legitimation.

There is in fact rather an interesting parallel between some of the discourse on Evidence Based Medicine and Latour's history of Pasteur (Latour 1984). At first, physicians in late 19th century France were not interested in Pasteur's ideas. In 1895 the private physicians' journal Concours Médical described itself as having, until September 1894, ‘voluntarily abstained from speaking of microbes, rodlike cells, comma bacilli and cocci ... knowing that practitioners, its usual readers, would not care very much for that overspeculative ... hodge-podge’ (quoted in Latour 1987: 132). The conventional explanations for this included the greater popularity (at that time) of public hygiene, the corporatist interests of medicine, the backward or reactionary nature of physicians' beliefs. According to this account, without these resistances the ideas of Pasteur would have swept through the practice of medicine by virtue of the momentum bestowed by their truth. Then, after 1894, the tide turned and doctors accepted Pasteur's ideas. In conventional histories of ideas this change is attributed to a change in the attitudes of doctors. What this overlooks, Latour claims, is the long and patient work of negotiation between Pasteur and the physicians to arrive at an altogether different technical translation of the ideas.

How could the private personal physicians agree to Pasteur's ideas of contagion? If they accepted the notion of contagion this would have had drastic results for their confidential relationship to patients - not to speak of the consequences of contagion for converting the trusted physician into a potential vector for disease himself. Why should they be enthusiastic about prevention? Unlike army doctors, who were salaried civil servants and the first group to accept microbiological ideas, the physician's market depended on at least some people needing to be cured. The acceptance of microbiology by private physicians came only after Pasteur had developed a diphtheria vaccine which had some curative effect instead of a rabies vaccine which had only a preventive one.  The expression of the scientific theory which got it accepted by the wider network of the medical profession was therefore one more congruent with the role in which doctors felt comfortable: ‘As soon as they were able to go on doing what they had been doing, the same physicians who had been called narrow and incompetent immediately got moving ...’ (Latour 1987: 127)

This historical account may offer some help when asking why it is so difficult to arrive at consensus on the effectiveness and cost-effectiveness of treatments and why, even when this appears established, it is still hard to ‘bring practitioners in line’ with the findings. In this paper it is not possible to pursue these questions in any detail -- the aim here has been to suggest a social-historical framework within which case studies might usefully be carried out. We can perhaps get as far as asking: in order to carry out an analysis of EBM within this perspective, what questions should be asked? The orthodox method might be to attempt to understand the resistance of [some] doctors to new knowledge and techniques. This then runs into the problem that in many fields, such as prevention of heart disease (a disease which accounts for almost half of all premature male deaths), definitions of what counts as new knowledge constantly change. In the perspective outlined here, we would be more inclined to start by asking what are the important groups involved in social-problem claims making (‘heart disease is too prevalent in our society’), moral claims (‘something has to be done about it’), factual claims (‘we know what should be done’) and subprofessional entrepreneurialism (‘we are the people to do it’). Who is making these claims? Why now? What are their objectives? What alliances is each group pursuing? And how is the pursuit of alliances (‘network building’) shaping the nature of the claims that are made over the course of the process as a whole? This, of course, includes the factual claims about how nature (the body, the mind, the microbe, the molecule, etc.) really is, how the machines or techniques actually do work, and so on, as well as claims about what the groups themselves can really do and moral claims about how the world ought to be.

CONCLUSION
The growth in popularity of Evidence Based Medicine is a fascinating phenomenon which may present social and historical studies of science with new challenges. According to Latour (1987) the two most important sets of alliances made by scientists have been those with the military and those with medicine, because ‘in both cases money is no object’ (Latour 1987: 172). This is a statement which will cause some astonishment to readers in the late 1990s. Although some of the most powerful pressure groups in Western countries are still those which protect the interests of the medical profession, these countries have all been involved in efforts to cut their medical budgets. However, just as the actor-network theory would lead us to expect, these efforts to control expenditure have duly broken open many of the factual claims made in the name of medical science. It is a tribute to the power of medicine's previous alliances that so many of its factual claims have been accepted so readily, without the need for alliance with other groups such as statisticians or economists. How will new facts about the effectiveness of treatments emerge? The theory would predict that this will only happen when a new alliance of forces has been settled. So it is not a case of finding out which facts are the real ones and then trying to commit clinicians to their use: rather it is only when there is a new settlement between clinicians, basic scientists, political administrators and whatever other significant interest groups enter the field that new knowledge about clinical effectiveness will become established.


Thursday 5 May 2016

Mayhew and Smith “An investigation into inequalities in adult lifespan”

The Introduction to this paper states that “the gap in life expectancy between the shortest and longest lived is widening for the first time since the late 1870s”. The widening is attributed to lifestyles, because “Men in lower (sic) socio-economic groups are the most likely to make damaging life-style choices”.
What are the data used to reach these conclusions? Wisely, the authors only start to consider inequality from the age of 30. Infant mortality would be pretty hard to attribute to foolish choices of health behaviour. The measure used is an Inter-Percentile Range (or IPR), which compares the average age at death of the 5% of people who live longest with that of the 10% of people who die youngest. This is done separately for men and women. To give examples from their Table 1: in 1879 the average age of the men who lived longest was 85.6 years compared to 39.7 years in the 10% who lived the shortest time, while in 2010 it was 95.7 years for the longest-lived 5% and 62.4 years for the shortest-lived 10%. This gives an “inequality gap” of 44.9 years in 1879 and 33.3 years in 2010.
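For readers who like to see the mechanics, here is a rough sketch of how an inter-percentile range of this kind can be computed, run on made-up ages at death; it illustrates the idea, not Mayhew and Smith's actual method or data:

```python
import numpy as np

def inter_percentile_range(ages_at_death, top_share=0.05, bottom_share=0.10):
    """Mean age at death of the longest-lived `top_share` of deaths minus the
    mean age at death of the shortest-lived `bottom_share` (an IPR of sorts)."""
    ages = np.sort(np.asarray(ages_at_death, dtype=float))
    n_top = max(1, round(top_share * len(ages)))
    n_bottom = max(1, round(bottom_share * len(ages)))
    return ages[-n_top:].mean() - ages[:n_bottom].mean()

# Invented adult (30+) ages at death, purely to show the mechanics
rng = np.random.default_rng(0)
ages = np.clip(rng.normal(loc=78, scale=12, size=10_000), 30, 110)
print(f"IPR: {inter_percentile_range(ages):.1f} years")
```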
The argument that “inequality” is widening is based on a comparison of the difference between the top 5% and bottom 10% between 1879 and 1939 (where it decreased by 7.7 years for men) and the same difference between 1950 and 2010 (during which time it decreased by only 1.2 years).
This is a very interesting use of the Mortality Database, a rich source. The first thing I noticed was the huge difference between the changes over time in the top 5% and those in the bottom 10%. Going from 84.6 years to 97.5 is a hefty increase, but a lot smaller than going from 39.7 to 62.4. The longest-lived 5% gained around 13 years while the 10% who died youngest gained around 22 years. The paper also contains equally interesting data for women.
This analysis complements the well-known increase in health inequality that motivated the Black Report, the Acheson Report, the work of the Marmot Review and similar work. We know that health inequality increased steadily from the 1930s to 2001. But what these reports mean by “inequality” is totally different from what Mayhew and Smith mean. They are talking purely about the average age at death in groups defined according to whether the members were among the shortest or longest lived at any given period. The Black Report and its successors were talking about groups based on social class. Recent reports from the Marmot Review group define groups in terms of the level of deprivation in a given residential area. In Mayhew and Smith’s analysis we have no idea about the income, working conditions, residential conditions or occupation of anyone. In fact, the idea that their analysis says something about health inequality requires us to assume already that a longer life has something to do with income or other measures of socio-economic position.

Similar work was done in the 1980s using a Gini coefficient for age at death. This work, as far as I remember, did not exclude deaths in childhood. You can think of the Gini coefficient as a kind of variance around the mean age at death. In the 19th century and early 20th, many more infants and children died, giving a far wider range of ages at which lots of deaths occurred. So, not surprisingly, the variance in age at death fell sharply over the 20th century. The authors of these studies (I have forgotten who they were) argued (unlike Mayhew and Smith) that this was inconsistent with studies showing increasing health inequality. What makes the two types of study similar is that in both cases we actually have no idea about the socio-economic conditions of the people who were living longer or shorter lives. So in neither case is it possible to attribute any social cause to the demographic changes.
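For comparison, a Gini coefficient of age at death can be computed just like a Gini for income. A minimal sketch on invented data (not the data those 1980s studies used) shows why heavy child mortality pushes the coefficient up:

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative values (here, ages at death)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

# Two invented regimes: one with heavy infant/child mortality, one without
rng = np.random.default_rng(1)
early = np.concatenate([rng.uniform(0, 5, 3_000),    # many deaths in childhood
                        rng.normal(65, 15, 7_000)])  # adult deaths
late = rng.normal(78, 12, 10_000)                    # adult-dominated mortality
for label, ages in [("with many child deaths", early), ("adult-dominated", late)]:
    print(f"{label}: Gini = {gini(np.clip(ages, 0, 110)):.3f}")
```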

Wednesday 30 March 2016

Why it is hard to tell if health inequality in UK is increasing or decreasing

One of the things I was most annoyed by while revising my book on health inequality was the discontinuity in some of the sources of data. Perhaps the most important of these is the interruption to the series on class differences in mortality that goes back to 1931 in England and Wales. I had thought it would be an easy matter. In the 1st edition of the book there is a table that shows how social class differences in mortality during working age steadily increased up to 1991. In 1931, mortality among working-age men in the most advantaged social class, made up of professionals and managers, was around 10% less than the average for all working-age men, and mortality in the least advantaged class, made up of non-skilled manual jobs, was about 11% higher than average. By 1991, the equivalent figures were 34% lower for the most advantaged class and 89% (yes, that is not a typo) higher for the least advantaged. All of these figures are adjusted to take account of the fact that the different social classes may be made up of people of different ages. For example, men may be older by the time they get into management (although if this influenced the result it would in fact work the other way around, as older people have higher mortality).
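For anyone unfamiliar with the measure, these percentages come from Standardised Mortality Ratios: observed deaths in a class divided by the deaths expected if that class had experienced the age-specific rates of all working-age men, with 100 as the average. A sketch with invented numbers:

```python
# Standardised Mortality Ratio (SMR) sketch with invented numbers.
# expected deaths = sum over age bands of (class population x all-men death rate)

all_men_rates = {"30-44": 0.002, "45-59": 0.008, "60-64": 0.020}        # invented rates
class_population = {"30-44": 40_000, "45-59": 30_000, "60-64": 10_000}  # invented counts
observed_deaths = 468                                                   # invented count

expected = sum(class_population[band] * rate for band, rate in all_men_rates.items())
smr = 100 * observed_deaths / expected
print(f"expected deaths: {expected:.0f}, SMR: {smr:.0f}")   # SMR of 90 here

# An SMR of 90 corresponds to mortality about 10% below the all-men average;
# an SMR of 189 would be about 89% above it, which is how the quoted figures read.
```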

So I thought, no problem, let's look up what happened to these figures in 2001 and maybe 2011 as well. However, there had been two large changes to the way the figures are calculated.

The first and least problematic of these is that the way in which it is decided which occupations go into which social classes has been clarified and put on a much more scientific basis. If you want to look more closely into this, the web site http://tinyurl.com/h3yxh34 might help. Although this "class schema", called the NS-SEC, is not the same as the "Registrar General's Social Classes" (RGSC) that were used between 1931 and 1991, we know from lots of work that different health measures vary by NS-SEC in a very similar manner to RGSC. And because there is a clear logic to why occupations are put in the different classes, the study of health inequality using this measure should become a lot more scientific than it was before.

What is more infuriating is where the numbers needed for the calculation of class differences in mortality now come from.

Between 1931 and 1991, the numbers came from two sources. To get a rate of mortality you need a numerator (the number of deaths in a social class) and a denominator (the number of people in that social class). Up to 1991, the numerator came from death certificates, because a person's death certificate includes their occupation. And the denominator came from the Census, because at the Census you know how many people there are in each occupation in the country; you add up the appropriate occupations into the social classes, and Bob's your uncle. The limitation here is that social class differences in mortality can only be calculated every 10 years, when there is a Census. This way of calculating health inequality is called the "unlinked method".
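A sketch of the unlinked calculation, with invented counts (deaths from certificates, populations from the Census), purely to show how the two sources are combined:

```python
# Unlinked method sketch: numerator and denominator come from different sources.
# All counts are invented, purely to show the mechanics.

deaths_by_class = {            # from death certificates (occupation as recorded at death)
    "professional & managerial": 1_200,
    "semi- and unskilled manual": 2_600,
}
census_population_by_class = {  # from the Census (occupation as reported by the person)
    "professional & managerial": 900_000,
    "semi- and unskilled manual": 650_000,
}

for social_class, deaths in deaths_by_class.items():
    rate = 100_000 * deaths / census_population_by_class[social_class]
    print(f"{social_class}: {rate:.0f} deaths per 100,000")

# The worry discussed below: the occupation on the certificate and the occupation
# given to the Census come from different sources, so they need not match.
```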

However, in the 1980s there was a bit of a panic about the way in which class differences in mortality had been rising so much. Some people guessed that the unlinked method might give a biased impression. What if people's relatives told the Registrar of Deaths a higher-status job than the one the deceased really had? In fact, if this had been happening (think about it) it would have reduced measured health inequality, not increased it. But never mind, the outcome was the most wonderful data set. The ONS Longitudinal Study linked 1% of the population at each Census of England and Wales to future events, such as mortality. So instead of the numerator and denominator being taken from different data sources, we could now calculate class-specific rates of mortality from data on the same people: they gave their occupation to the Census and this could be linked to the information on their death certificate. But this actually led to some pretty big problems of its own.

I am just beginning to realize what a complicated topic this is! It stands as an example of the fact that "Big Data" may entail a lot more thought than some people seem to realize.

To cut a long story short, eventually the estimate of the size of class differences in mortality came to be taken from yet another set of numbers. This time the numerator was taken from death certificates again (with the occupation of the deceased person taken from the certificate) and the denominator was taken from something called the Labour Force Survey (LFS). There are some advantages to this. The LFS is done every year (unlike the Census). Although it does not count everyone in England and Wales, it is a large survey and the numbers can be multiplied up ("grossed up") to give an idea of how many people are in each occupation in each year. BUT the LFS suffers from non-response. Unlike the Census, it is not obligatory for a person to take part in the LFS. So we have now moved from a denominator taken from an obligatory census of everyone in England and Wales to one taken from an annual survey of (I think) around 40,000 people, some of whom may refuse to take part.

On the one hand, this should not necessarily lead to an under-estimate of social class differences in mortality, because non-responders to surveys tend to be people living in more adverse conditions. Since everyone gets a death certificate, more disadvantaged people are more likely to have their death recorded than their occupation, which would tend to give higher death rates in more disadvantaged groups. In addition, the way the measure is calculated has changed. This is probably a sensible change.

However this may be, the Office for National Statistics for England and Wales (Scotland and N Ireland have their own organisations) has given us as near as possible an estimate of what has happened to social class differences in mortality since 1991. Instead of the Standardised Mortality Ratio used between 1931 and 1991, which compares the mortality rate in each social class to a notional "average", we now have a measure that just gives the mortality rate per 100,000, adjusted to take account of age. ONS usefully goes back to 1971 and calculates the mortality rates in this way, then presents a ratio of the rates in the most versus the least advantaged social classes. Having done this, what we see is that in 1971 this ratio was 1.8 (working-age men in the least advantaged class were 80% more likely to die than those in the most advantaged), while in 2001 and 2010 the ratio was 2.8 (men in the least advantaged class were 180% more likely to die). Bear in mind that this is a comparison between the very best and the very worst employment conditions, not between the best and the average or the worst and the average. So it is bound to look rather a large difference. Which it is.
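For the record, a directly age-standardised rate weights each class's age-specific death rates by a standard population, and the ratio of the two standardised rates is the kind of inequality measure quoted above. A sketch with invented rates and placeholder weights (not the standard population ONS actually uses):

```python
# Directly age-standardised mortality rates and a rate ratio, invented numbers.

standard_weights = {"30-44": 0.45, "45-59": 0.40, "60-64": 0.15}   # placeholder weights

# Invented age-specific death rates per 100,000 for the two extreme classes
age_specific_rates = {
    "most advantaged":  {"30-44":  60, "45-59": 250, "60-64":  900},
    "least advantaged": {"30-44": 170, "45-59": 700, "60-64": 2500},
}

def standardised_rate(rates):
    """Weight each age band's rate by its share of the standard population."""
    return sum(standard_weights[band] * r for band, r in rates.items())

asr = {cls: standardised_rate(r) for cls, r in age_specific_rates.items()}
for cls, value in asr.items():
    print(f"{cls}: {value:.0f} per 100,000")
print(f"rate ratio: {asr['least advantaged'] / asr['most advantaged']:.1f}")
# A ratio of 2.8 means mortality 180% higher in the least advantaged class.
```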

What would the comparison look like if we could go back to 1931 or even 1961? There is just no way of knowing.