Latest Posts

Play: A Sociological Approach

In their recent essay ‘Heterophony and Hyper-Responsibility’, Åkerstrøm Andersen and Knudsen present a sociological approach to play that is worth recording:


Gregory Bateson suggests that play is a special form of communication in which the message is that ‘these actions in which we now engage do not denote what those actions for which they stand would denote’ (2000: 180). When children play-fight, they continually draw up a distinction between play-fighting and fighting. Thereby, they establish that a marked strike signifies the strike but does not signify that which a strike would signify. Bateson’s final and more precise formulation is, ‘These actions in which we now engage do not denote what would be denoted by those actions which these actions denote’ (2000: 180). Thus, play represents a distinct communicative doubling machine. Play doubles the world so that we have a world of play and a real world, and the doubling takes place on the side of the play. Dirk Baecker (1999: 103) formulates it in this way:

‘In play, socialness is constituted by ways of reflection onto itself as the other side of itself. In play, socialness is experienced as what it is, namely as contingent, roughly meaning that it is neither necessary nor impossible, or again, given yet changeable. Play in general reveals the form of the social by which the play infects the world.’

Play represents a communicative sociality, characterised by a doubling through which the contingency of social reality becomes visible. In play, certain rules exist. Hans-Georg Gadamer says that in play you forget yourself; you dedicate yourself totally to the play process (1985: 92).


Åkerstrøm Andersen, N. and Knudsen, H. (2015) ‘Heterophony and Hyper-Responsibility’, in Knudsen, M. (ed.) Systems Theory and the Sociology of Health and Illness: Observing Healthcare, Abingdon: Routledge, p. 90.
Baecker, D. (1999) ‘The Form Game’, in D. Baecker (ed.) Problems of Form, Stanford, Calif.: Stanford University Press, pp. 99–107.
Bateson, G. (2000) Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology, Chicago, Ill.: University of Chicago Press.
Gadamer, H.-G. (1985) Truth and Method, New York: The Crossroad Publishing Company.


Contingency and Double Contingency

Here is a useful and brief description by Luhmann of the core concepts of ‘contingency’ and ‘double contingency’, taken from an essay entitled ‘Generalized Media and the Problem of Contingency’.


If I understand correctly the English term contingency in its present use, it has its core meaning in dependency and draws the attention primarily to the fact that the cause on which something depends performs itself a selection from other possibilities so that the contingent fact comes about in a somewhat chancy, accidental way. If we look into the theological and philosophical tradition of the term, our findings confirm this interpretation.[1] In scholastic philosophy the term contingens belonged to the theory of modal forms. Used to translate the Aristotelian ἐνδεχόμενον (= possible) and mixed up with the classical Latin sense of accidens or eveniens, it was narrowed down to signify a special type of possibility i.e. ‘possibility not to be’.[2] This ‘possibility not to be’ was attributed to a world created by the unlimited will of God. Only a contingent world, as the nominalistic scholastics found out, could be conceived as created by God. Contingens was used in a double sense as a general category of modal logic and as a term which includes causal selection as the factor which decides between being and not being.[3] Contingency of the world came to be a corollary of the absoluteness of God. Contingency, therefore, also meant dependency on His creation or the visibility of His free will in His Creation. This led the pious to look to God for the elimination of infinite other possibilities and for a guarantee that the selected world was the best of all possible worlds.

This tradition was, of course, known in its results to Descartes and to Hobbes. Finding the transcendent God liberated from any essential commitment to a pre-existing ‘cosmic’ order of nature or ideas, they drew the consequences for the individual and for the social order. They secularized the problem of selectivity. Descartes transformed it into a theory of individual and cognitive processes and Hobbes into one of social-political and normative processes. Not yet sociologists, they did not reflect on the interdependence of individual and social processes; nor could they pay sufficient attention to the fact that the problem of contingent selection became urgently relevant in connection with evolutionary changes in the social system of society.[4] Nevertheless, this background of conceptual history helps to see why, and in what sense, contingency is inherent in interaction.

Summarizing what we have discussed so far, contingency means that being depends on selection which, in turn, implies the possibility of not being and the being of other possibilities.[5] A fact is contingent when seen as a selection from other possibilities which remain possibilities despite the selection. Implying a potential for negation and the visibility of other possibilities, the concept can be applied only to the meaning of subjective experience and action. This does not limit the range of application and does not, of course, mean that contingency is accessible only by ‘introspection’. Every fact may be seen as contingent: the objective world,[6] the concrete self with its biography, conscious life,[7] decisions and expectations and other persons with their experiences and choices. Contingency is a universal, but it nevertheless presupposes a subjective point of view. It can be applied to all facts but not independently of a subjective potential to negate and conceive other possibilities.


Double Contingency
We are now prepared to analyze the special problem of double contingency in interaction. Contingency does not double by erasing the twofold dependence;[8] nor does double contingency signify two contingencies in the sense of a simple addition; nor does it mean interdependence in the sense that ego depends on alter and vice versa, the subjective point of view coming in later and only as an interpretation of this interdependence.[9] The doubling comprehends the whole structure: the generalized potential to conceive of facts as selections implying negations, to negate these negations and to reconstruct other possibilities. Double contingency is ‘double négation virtuelle’[10] meaning that possibilities of negation can be retained and stabilized as reciprocally not actualized but implied possibilities. The doubling of contingencies is possible because this potential is located in subjects, and subjects can experience other subjects. The doubling does not double the world and does not construct two separate realms of contingency. The potential is universal for each subject and is an aspect of the meaningful constitution of this encompassing world so that ego has to identify alter as being another subject in his world, and vice versa. Double contingency rests on the fact that contingency is subjective and universal at once.

[…] The point is that we have to conceive of meaningfully organized selectivity as the specific human condition and that we need the outlined conceptual framework to understand evolutionary gains as higher forms of problematization and organization of selective processes. Double contingency is not simply a problematic fact inherent in the nature of interaction. It has the double aspect of high achievement and high risks. It makes the selectivity of other subjects selectively available[11] at increasing risks. The selection chain may be broken and expectations disappointed – a risk which, then, can become a specific (secondary) problem and be solved by specialized mechanisms.


[1] English speakers may consult Philotheus Boehner, The Tractatus de praedestinatione et de praescientia Dei et de futuris contingentibus of William Ockham (St. Bonaventure, N.Y., 1945) pp. 41ff; B. Wright, ‘Necessary and Contingent Being in St. Thomas’, The New Scholasticism, 25 (1951) pp. 439-466; Edmund F. Byrne, Probability and Opinion (Den Haag: M. Nijhoff, 1968) pp. 188ff.
[2] The negation refers, as Leibniz did make clear, not to the possibility itself – this would mean impossibility – but to the being whose possibility is stated. Contingens is a positive statement about the possibility of negative being.
[3] See Gerard Smith, ‘Avicenna and the Possibles’, The New Scholasticism, 17 (1943) pp. 340-357; Celestino Solaguren, ‘Contingencia y creación en la filosofía de Duns Escoto’, Verdad y Vida, 24 (1966) pp. 55-100.
Particularly clear on the double sense of the term is the statement of an anonymous author of the fifteenth century, reprinted by Léon Baudry (ed.) La querelle des futurs contingents (Louvain 1465-1475) (Paris: J. Vrin, 1950) pp. 126-133 (127): ‘contingens igitur in prima sui divisione est duplex. Unum quod ex significato idem est quod possibile; et sic accipitur contingens absolute, non considerando contingens per habitudinem et respectum ad causam suam. Aliud est contingens quod distinguitur a possibili quia includit habitudinem et respectum ad causam que in producendo (procedendo?) potest inhiberi.’
[4] Here we should note at least the astonishing fact that the conceptual preparation for modern society preceded its institutionalization: high, contingent selectivity was a problem of thinking before any real needs came up to organize mechanisms for contingent selection on a large scale. Parsons could see here a corroboration of his hypothesis that large-scale evolutionary change is controlled on the highest cybernetic level, i.e. by the cultural subsystem of the action system. Cf. Societies: Evolutionary and Comparative Perspectives (Englewood Cliffs: Prentice Hall, 1966) pp. 113ff.
[5] The scientific and empirical status of these ‘other possibilities’ is, bluntly stated, unknown. Its clarification will be one of the most important theoretical and methodological tasks of social sciences in the future. Max Black in his important essay on ‘possibility’ leads us into doubts about the ‘ghostly view’ of other possibilities as pure illusions, shatters en passant the traditional construction of possibilities in terms of a theory of modalities – and leaves the scared reader on the edge of this precipice with the advice ‘to undertake a detailed survey of how we do in fact use the words possible, possibility, and their cognates.’ See Models and Metaphors: Studies in Language and Philosophy (Ithaca, N.Y.: Cornell University Press, 1962) pp. 140-152.
[6] The contingency of ‘the’ world implies, however, a metaworld as the horizon of all possibilities from which the actual world is selected by creation or by evolution.
[7] The Cartesian tradition of subjective metaphysics would deny this statement and exempt consciousness from contingency. But the immediacy of experience of one’s own consciousness is no sufficient reason to exclude its contingency.
[8] See, however, James Olds, The Growth and Structure of Motives: Psychological Studies in the Theory of Action (Glencoe, Ill.: The Free Press, 1956) pp. 198ff.: ‘Within a presented object system, contingency is single in the sense that if I perform a particular set of behaviours I will achieve a particular outcome for my labours. Within a non-presented object system, there is double contingency: If I perform the behaviours that will take me to that object successfully, then, if I perform a certain set of behaviours (manipulating the object) I will achieve a particular outcome for my labours.’ Social contingency (double contingency in the sense of Parsons), then, is only a special case of this two-step dependence. But this argument confounds contingency and dependence. Consequently, social contingency, i.e. the interaction of subjects knowing each other as subjects, is constructed as a special case of a much too simple case.
[9] Parsons in some formulations comes close to this view. See Talcott Parsons, Robert F. Bales and Edward A. Shils, Working Papers in the Theory of Action (Glencoe, Ill.: The Free Press, 1953) p. 35.
[10] We owe this splendid formulation to Paul Valéry, ‘Animalités’, Oeuvres (Paris: Éd. de la Pléiade, 1957) p. 402.
[11] James Olds, op. cit., touches upon this point at p. 205, describing the gain in terms of economy of time and movement. See also Donald M. MacKay, ‘Communication and Meaning – a Functional Approach’, in F.S.C. Northrop and Helen H. Livingston (eds) Cross-Cultural Understanding: Epistemology in Anthropology (New York: Harper & Row, 1964) pp. 162-79 (163).


Luhmann, N. ‘Generalized Media and the Problem of Contingency’, in Loubser J. et al (eds) Explorations in General Theory in Social Science: Essays in Honour of Talcott Parsons (vol. II) (New York: The Free Press, 1976) 507-532, pp. 508-509.

How Not to Criticise Social Theory

There is no universal agreement about what the proper objectives of theory should be. Perhaps for that reason commentators can often be seen to criticise theories or theorists in ways that seem unfair. To assess a theory adequately it is necessary to consider what it is actually trying to achieve. With that in mind, Baert and Carreira da Silva have identified four common mistakes critics make when registering their dissatisfaction with a theory:

1. Explanatory Reductionism: the explanatory reductionist assumes that all theories are about explaining or predicting social phenomena. That may be a common goal, but it is not ubiquitous. Some theories aim to provide understanding rather than explanation. The authors point to theories designed to develop self-understanding: ‘they allow us to consider some of our presuppositions and to re-describe and assess our present societal constellation’.

2. Perspectivism: here the critic focuses on the perspective of the theory, that slice of social life it aims to describe (power, agency, values). The critic often implicitly suggests that there is no independent measure by which to judge these differences. The authors argue that although few critics are explicit about this, many practise it; perspectivists are best identified by the way in which they focus on the differences between theories. But there is more to social theories than the differences of subject matter between them. They can legitimately be judged and compared by reference to a variety of measures, including intellectual depth, analytical clarity, originality, explanatory power and internal consistency.

3. Externalism: this must be the most common pitfall, criticising a theory for failing to do something it never set out to do in the first place. Even though it is almost always better to evaluate theories for internal consistency, the authors identify two exceptions. External criticisms can be useful (i) as a stepping stone to internal critique and (ii) as a medium for developing one’s own social theory. Beyond this, however, criticising a theory for ignoring something is not going to be very helpful.

4. Political Effect: here the theory is criticised for the actual or potential effect it might have on politics or society generally. The authors argue that this sort of consequential reasoning should not normally be allowed to interfere with the intellectual appreciation of a theory.

So there you have it, the four most common mistakes commentators make when critiquing social theory. Look out for them…


Baert, P. and F. Carreira da Silva Social Theory in the Twentieth Century and Beyond (2nd edn) (Cambridge: Polity Press, 2010) pp. 8-11.

To Cybersemiotics through Bateson, Peirce and Luhmann

This post introduces a very interesting presentation from Søren Brier, Professor for Semiotics of Information, Cognitive and Communication Science at Copenhagen Business School. It is entitled ‘Cyber(bio)semiotics: Transdisciplinary Through Bateson, Luhmann, and Peirce’. As you might expect from the title, Brier discusses the intellectual currents and developments that led to cybersemiotics. It is a little slow-moving, particularly at the start, but it is rich in content, taking in the impact of Norbert Wiener, Gregory Bateson and Heinz von Foerster on cybernetics, information theory and general systems theory; the important concepts of autopoiesis and structural coupling from Maturana and Varela; Luhmann’s social systems theory, which allows new approaches to communication; and Peirce’s biosemiotics. Apart from achieving its own aims, the presentation usefully (if partially) contextualises Luhmann’s work. Here is the bio Brier provides:

I am an interdisciplinary researcher that has moved from an MA in biology (ecology and behavioral sciences) over a gold medal thesis in psychology (philosophy of ethology) both Copenhagen. U. through a PhD in philosophy of information sciences (Roskilde U.) after 10 year teaching at the Royal School of Library and Information science to a Habilitat in Cybersemiotics combining the natural, life, social and technical sciences with the humanities though an integration of a Peircean biosemiotic and Luhmann’s autopoietic system science into a transdisciplinary framework (Copenhagen Business School).


Science, Objectivity and the Passion for Explaining

In this essay Maturana negotiates several complex philosophical questions in a surprisingly accessible way. In it he answers three questions:

  1. What characterises cognitive constructivism?
  2. Why, under this view, is it not possible for science to achieve true objectivity?
  3. If this is so, why does science not slip into relativism?

As ever, the term ‘science’ encompasses the social sciences as well. Maturana sees no justification for a cleavage between the two – certainly not the old distinction between quantitative and qualitative methodologies. Throughout, he considers these issues not as a philosopher but as a biologist reflecting on science as ‘a cognitive domain generated as a human biological activity’.


First, what kind of person is the scientist? What is the motivating force not only of the scientist, but the domain of science itself?

For Maturana, the fundamental emotion that enlivens science, that determines it as a domain in which science takes place as a human activity, is curiosity; curiosity which emerges as a desire or passion for explaining. In this domain explanations are adequate – they qualify as legitimate scientific explanations – if they meet the ‘criterion of validation of scientific explanations’. That criterion is policed by the scientific community itself.

Already, he has set the scene for a discussion of cognitive constructivism and the problem of objectivity. Already he has framed the issue – the nature of science – as a question situated in the realm of human activity. To further channel our attention he turns to the etymology of the word ‘science’. Originally ‘science’ was a synonym for knowledge; any knowledge whose validity could be defended on methodological grounds. In modern times this changed. Science now refers only to knowledge validated through one method, the scientific method. This is due to two important (often unstated) assumptions:

  1. That the scientific method, through verification, corroboration, or the denial of falsification, reveals or specifies, an objective reality that exists independently of what the observers do, even if it cannot be fully known; and
  2. That the validity of scientific explanations and statements rests on their connection with such objective reality.

In refuting these claims Maturana answers the first two questions listed above. While the conventional view assumes for science direct access to an objective independent reality, Maturana argues that scientific activity is not structured in a way that would facilitate this direct access. Put differently, science cannot attain pure objectivity. A consideration of just what it is that scientists do when they are doing science helps to explain.


What Scientists Do: Scientists as Observers

Scientists are observers. As observers they distinguish in language the different kinds of entities that they bring forth as objects of their explanations and their reflections. Thus, scientists are human beings in language making distinctions in language. As well as observing the world, scientists are engaged in observing themselves. Scientists are ‘always trying to observe their observing in an attempt to describe and explain what they do. They begin to reflect in language upon what they do and how they do it as they operate as “languaging animals”.’

Again, science is considered very much as a human activity. Accordingly, the observer ‘happens in observing, and, when the human being that the observer is dies, the observer and observing come to an end’. When we consider the activity of the observer we have two choices. We can either take for granted, and ignore, the cognitive abilities of the observer as ‘unexplainable properties’. Or, with Maturana, we can take them seriously and find that they must be explained by showing how they arise as a result of the biology of the observer as a human being.

The scientist as a human being is, like all living systems, a structure-determined system. The scientist is an operationally closed system that does not admit instructive interactions. Nothing external to a living system (the observer) can specify what will happen in him or her. It follows ‘that the observer as a living system constitutively cannot make explanations or statements that reveal or connote anything independent from the operations through which he or she generates his or her explanations and statements’ (emphasis added). Thus the thesis of operational closure is central to an understanding of cognitive constructivism (for more see here).

The word ‘cognition’, then, when used by scientists must signify what scientists do or how they operate when they generate their cognitive statements, i.e. new knowledge. To understand what we mean by ‘knowledge’, we must first specify what we mean by a ‘cognitive domain’, and by ‘actions’.

‘Actions’ refer to all that we do in any operational domain to bring forth a discourse. Thus, to think is to act in a domain of thinking. Science is just one operational or ‘cognitive domain’. There are as many cognitive domains as there are domains of adequate actions (distinctions, operations, behaviours, thoughts, or reflections) that the observers accept. Each of them is constituted by the criterion of validity proper to that domain.

Accordingly, knowledge in the science domain is constituted by adequate actions that take place there. It is constituted by those distinctions, operations, behaviours, thoughts, or reflections which take place in the science domain and are assessed according to the ‘criterion of validation of scientific explanations’.


Scientific Explanations

Science is irrevocably a human activity but to understand further why it cannot achieve pure objectivity we need to consider the nature of the explanations produced by science.

Explanations are propositions presented as reformulations of experiences that are accepted as such by others. Therefore, there are as many different kinds of explanation as there are different criteria of acceptability; that is, scientific explanations are just one kind of explanation. The different criteria of acceptability that we recognise as such define the different explanatory domains with which we operate.

Explanations are experiences of the observer (the scientist) that arise as she operates in her domain of experiences; the science domain. The science domain, like all explanatory domains, is an ‘expanding experiential domain in which the observer lives new experiences, asks new questions, and unavoidably generates new explanations in an unending, recursive manner’.


The Generative Mechanism

We have said that scientific explanations are propositions presented as reformulations of experiences. The experience (phenomenon) to be explained is reformulated in the form of a generative mechanism which, through its operation, enables a standard observer to have in his domain of experiences the experience to be explained. (Generative mechanisms might include theorems, equations, models, doctrines etc.)

As generative mechanisms, scientific explanations are constitutively mechanistic. They deal only with structure-determined systems. As such, scientific explanations take place in the domain of experiences of the standard observer, and scientific explanation operates only in the area of structural determinism in which it is proposed.

The conventional view finds scientific explanations to operate as phenomenic reductions. If this were so science would indeed be objective. But scientific explanations cannot operate in this way. The relation between the phenomenon to be explained and the explanation is non-reductionist. More precisely, the relation between the phenomenon and the generative mechanism – the mechanism that generates the explanation – is non-reductionist. This must be the case because the operations giving rise to the generative mechanism, and to the relation between it and the phenomenon to be explained, ‘intrinsically take place in independent and non-intersecting phenomenal domains’. This is the reverse of reductionism.

Maturana finds this understanding to be liberating because it allows us to see that:

‘… there are phenomena like language, mind, or consciousness that require an interplay of bodies as a generative structure but do not take place in any of them. In this sense, science and the understanding of science lead us away from transcendental dualism.’

The scientific explanation and the phenomenon to be explained lie in different domains. It is possible to understand the phenomenon to be explained – but only through the generative mechanism. The phenomenon to be explained lies in a separate domain to which we have no direct access – only an indirect (and abstract) relation through the explanation provided.

Maturana, then, makes no assumptions about an objective independent reality. He finds that the scientist uses scientific explanations only to explain his experiences by reformulating them with other experiences, with reference to the criterion of validation of scientific explanations:

‘The scientist does not use them to reveal or connote anything deemed to be independent of what he or she does. In fact, the contrary happens, because for these same reasons scientific explanations enter into the constitution of the world or worlds that we standard observers live through the transformation and expansion of our domain of experiences that they bring forth as we operate with the consequences of them in our experiential domain.’



Finally, if science has no direct access to an independent objective reality, what is left to stop science descending into relativism? Maturana does not answer the question directly; rather, he addresses it implicitly throughout the essay. Science can only operate (it only makes sense) in relation to the criterion of validation of scientific explanations:

‘Scientists are rigorous in the endeavour to always be impeccable in the application of the criterion of validation of scientific explanations as they seek to generate scientific explanations.’

Scientists can only ‘do’ science with reference to the criterion of scientific validation. It is this criterion which specifies conditions of acceptability known to and agreed upon by the epistemic community of scientists. This is what stops cognitive constructivism from sliding into free-wheeling relativism. But it says nothing directly about ‘reality’. Reality, to borrow from von Foerster, is as it is. Reality carries on – regardless of whether or not our explanations describe it with precision, regardless of whether or not our explanations produce useful understandings.


Maturana, H. ‘Science and Daily Life: the Ontology of Scientific Explanations’, in Krohn, W. et al (eds) Selforganization: Portrait of a Scientific Revolution (Dordrecht: Kluwer Academic, 1990).

Paradox of the Word

Literature devoted to the medieval paradox is a fount of creativity and wit. This short piece considers two paradoxes of religious orthodoxy that have inspired writers through the ages, from medieval theologians, to seventeenth century Englishmen, to modern poets like Eliot interested in the metaphysical tradition. Both examples show the enjoyment their authors derived from the paradoxical nature of much religious dogma. Both dramatise the Incarnation, the teaching that ‘The Word was made flesh and dwelt among us’.

St Thomas Aquinas presents this core doctrine of Christianity as a couplet in the vesper hymn Pange Lingua:

‘Verbum caro panem verum
Verbo carnem efficit’
[The Word made flesh makes true bread flesh by a word]

With this conceit we are presented with the self-referential autopoiesis at the heart of Christian teaching: the paradox of the Incarnation of the Word of God. Here we do not speak of God the Father or God the Holy Spirit, but the ‘second person’ God the Word. The Catholic mass still employs the words of God the Word (the reported speech of Christ from St Paul’s first epistle to the Corinthians) at the moment of transubstantiation: ‘This is my body. This is my blood.’

Here, then, is another example of the performativity of words; words from the Word. The moment of transubstantiation is a moment of high biblical reflexivity when, as Ong puts it: ‘God the Word became flesh and this same Word, when He wishes to convert bread into His flesh uses words as the instruments for His action.’

Another familiar medieval paradox, associated with the sermons of St Augustine, is the paradox of the Verbum Infans. The Verbum Infans, the child Jesus, was not only the infant word, but also the silent word. Infans is a synonym for mutus: the unspeaking word, presenting a ‘strange and startling paradox, but an unmistakable dogmatic fact that the Word of God initiates His personal mission among men in the inarticulate role of a child’. The paradox has been enjoyed by many including Lancelot Andrewes in a sermon: ‘What, Verbum infans, the Word of an infant? The word and not to be able to speak a word.’ And T.S. Eliot in Gerontion:

‘The word within a word, unable to speak a word,
Swaddled with darkness.’


Ong, W. ‘Wit and Mystery: A Revaluation in Mediaeval Latin Hymnody’, 22(3) Speculum (1947) 310.


Cognitive constructivism of operationally closed systems is explained succinctly by Knorr Cetina:

‘Closed systems are systems which operate entirely within their own medium and machineries of world construction. An example is the brain, which the biology of cognition sees as informationally closed towards its environment. Perception, for example, is accomplished by the brain, not the eye, and the brain can only construe what it sees from signals of light intensity which arrive at the retina. In order to form a picture of the nature of the source of these signals, the brain makes reference to its own previous knowledge and uses its own electro-chemical reactions. Phrased differently, in perception the brain only interacts with itself and not with an external environment. The brain reconstructs the external world in terms of internal states, and in order to accomplish this the brain ‘observes’ itself (Maturana and Varela, 1980).

… Closed systems cannot build, with the environment of interest to them, a shared life-world. They lack the possibility of co-presence postulated by Schütz (1970) as an important feature of face-to-face situations. They equally lack co-temporality, the possibility of conjoint time, and the possibility of conjoint status of human agents with the status of nonhuman entities or objects.’

At the time of writing Knorr Cetina was researching the particular world of high energy experimental physics as a closed system. For Maturana on the constructivism of science see here.


Knorr Cetina, K. ‘Primitive Classification and Postmodernity: Towards a Sociological Notion of Fiction’, 11 Theory, Culture and Society (1994) 1.

Knowledge and Power

In their book The Power of Scientific Knowledge Grundmann and Stehr ask two important questions:

  1. How does scientific knowledge become powerful in practice?
  2. What counts as knowledge for this purpose?

This post presents some of the key points they highlight. The authors are not aiming to produce a general theory of knowledge application, but rather to provide ‘some historical-analytical tools and data for such an endeavour’. [It should be noted that they employ the more inclusive German understanding of the word ‘science’, which encompasses knowledge produced by the social sciences.]

  1. What Counts as Scientific Knowledge?

Harold Kroto, a British chemist and Nobel laureate, presents a simple answer to the question ‘what counts as knowledge’. Kroto argued that there are many theories but only a few that are true. True theories are facts that have been found to work in practice through experimental work.

This explanation is not sufficient to explain how some knowledge becomes powerful while other knowledge is largely ignored. It presents only an ex post facto response to the question, taking no account of a wide body of evidence from the history of science and the sociology of science which shows that research is always accompanied by:

  • uncertainty
  • experimenter’s regress, and
  • trials of strength between proponents and opponents of the theory, often decided only by a generational shift, i.e. ‘opponents eventually die, and a new generation grows up familiar with [the theory]’[1]

A number of models employed in the past to deal with this question emerge from the literature. Two of the more successful are presented here. Both emerge from the traditional philosophical distinction between truth and utility, which can be traced back at least to Francis Bacon’s Novum Organum (1620). When we employ this distinction, theory and practice are alienated from each other. The ruling code in science becomes truth and not usefulness. Truth controls use. This forms the basis of the Necessitarian Model.

Necessitarian Model:
The Necessitarian Model conflates knowledge with truth. For knowledge to become powerful it must represent objectivity and truth. Grundmann and Stehr argue that this is the ‘taken-for-granted answer among the public, many policymakers and members of the scientific community’. But it is deceptive. Several studies now show that what counts as objectivity and truth is subject to historical change.

Insofar as the Necessitarian Model still holds sway in modern society, and it does, the reason can be linked to dominant norms in the scientific community. Knowledge, truth and power may be conflated in light of scientific norms that insist that ‘knowledge claims are at their best if they are trans-historical and trans-situational’. Accordingly, absence of context-specificity is seen as a benefit, as adding to the validity of a knowledge claim.

The Instrumental Model is similar to the Necessitarian Model, indeed it may be thought to build on its claims.

Instrumental Model:
The traditional instrumental model of knowledge rests on a number of basic assumptions:

  1. Knowledge is determined by the ‘logic’ of science in conjunction with the nature of the world of objects.
  2. Societal and cultural factors surrounding the producers of knowledge have no influence on the production of knowledge or on the context of justification.
  3. Knowledge exists independently of the context of application. It is ‘value neutral’ and can be applied for good or bad purposes.
  4. Scientific knowledge is objective and rational and universal. Its utilization is not impeded by the special circumstances of time, place, and social condi­tions.

Set against these models, Grundmann and Stehr present a new understanding of how knowledge becomes powerful in practice.

Grundmann and Stehr’s Approach:
The authors move away from notions of ‘knowledge as pure objectivity’ or ‘knowledge as truth’. They define knowledge as the capacity to act (or capability of taking action); that is, as the possibility of ‘setting something in motion’. This understanding actually derives from Bacon. The famous invocation ‘scientia potentia est’ is usually attributed to Bacon and translated simply as ‘knowledge is power’. In fact Bacon claimed that the particular utility of knowledge came from its ability to set something in motion. Accordingly, Grundmann and Stehr translate ‘potentia’ as potential, not power. Knowledge is the potential to effect change.

We know that when new knowledge comes into a policy field, it can destabilize existing social relations. When that happens, interests have to be redefined; they are no longer understood as ‘a given’. With this in mind, we can understand knowledge as the first step toward action. Knowledge puts us in a position to change reality.

There has always been interest in the power and promise of scientific knowledge. By contrast, systematic considerations about the influence of social conditions on knowledge are relatively recent. Perhaps since the Enlightenment, the prevailing concern has been not what knowledge will do to our society, but what a deficit of knowledge might do. More is always better, and ‘concern with the impact of knowledge on society receded as a problematic issue’.


  2. How does Knowledge become Powerful?

Once ideals become institutionalised in policies they assume a kind of reality that hides their earlier contingency. An element of historical inevitability arises; as though what has happened was bound to happen. When the link between knowledge and politics starts to appear inevitable it becomes the task of the historian and critical social scientist to ‘unravel this apparent inevitability’:

“Michel Foucault used the term discourse to describe the reality of thoughts and practices enshrined in a specific historical period. He used the term archaeology to describe the efforts needed to analyze and deconstruct these discourses.”

Knowledge can become powerful in two different ways, either through technical or through social applications.

  1. Technical applications: through the work of scientists and engineers, and the technical artifacts they produce. The power of knowledge manifests itself in working machines or drugs.
  2. Social Applications: social scientists and their links to the political, economic, or social world. The power of knowledge manifests itself in social action i.e. effective political interventions into economy and society through laws, regulations and policies.

But how does this happen? How do we get to application? The established distinction between technology push and demand pull has generated two general theories of how knowledge becomes powerful through social action:

  1. Technology push – knowledge flows into society: it originates in basic research, continues through applied research and ends in technical or other practical applications. “This linear model makes use of the flow metaphor and implies that for knowledge to become practical, barriers have to be broken down so that knowledge can run freely (‘science push’).”
  2. Demand pull – knowledge is commissioned from users: knowledge can be produced on demand and deliver solutions as required (‘social pull’). This approach has many variants e.g. science functioning as provider of legitimation.


The Power of Ideas
If knowledge is about setting something in motion, we should ask (i) what is the utility of knowledge – what does it do? and (ii) who are the doers and what are their requirements? Grundmann and Stehr argue that policy is based on selective bodies of knowledge, not full or comprehensive knowledge. Additionally, scientists can ‘speak truth to power’ yet find power decidedly indifferent. Accordingly, it is useful to ask what are the preconditions for knowledge to become powerful?

First, from the policy-maker/politician’s perspective, what is the utility of knowledge?
Knowledge can be used for:

  1. problem-solving,
  2. legitimizing decisions (or both), and
  3. influencing the construction and framing of policy problems without leading to specific policies.

Knowledge can also be used as a powerful tool to shape reality. This is familiar to sociologists and media studies scholars who speak of issue framing (Goffman 1974; Hajer 1995). It is well known that frames:

  1. Select segments of reality for further attention, and
  2. Provide schemata of interpretation that people use to locate, perceive, identify, and label.
  3. Define problems and determine what a causal agent is doing; they identify the forces creating the problem. Frames are instrumental in diagnosing causes and they solicit moral judgments.
  4. Suggest remedies, offer and justify treatments for the problem and predict their likely effects.

In this sense ideas can be said to have an independent power. That is to say, if in any policy issue the problem was framed in a different way, we would see a different sequence of events unfolding, leading (in all likelihood) to a different outcome.

Second, who are the doers and what are their requirements?
Clearly, frames need to be constructed and introduced into the policy process. This is where policy entrepreneurs, journalists, and political and academic elites can play an important role. Thus, any simple binary opposition between the scientist (aiming to explain reality) and the politician (aiming to turn something into reality) is complicated by the emergence of many more societal actors.

Naturally, the authors handle this issue in a more subtle and complex manner than can be rendered here. For present purposes it suffices to note that policy entrepreneurs play an instrumental role in getting an issue onto the political agenda. They do so largely by persuading others to accept their definition of the problem. If the definition of the problem tends to determine the nature of the solution, then this power of persuasion is critically important. It gives policy entrepreneurs significant levels of societal influence and even control.

In relation to the requirements of policy entrepreneurs and policy makers, an important precondition is that scientific knowledge is transformed into ‘usable knowledge’. In this regard the authors differentiate between:

  1. Knowledge for practice: the potential of science to be put into practice, and
  2. Practical knowledge: the actual practical use made of science.

This distinction emphasises the reality that we cannot be certain a priori whether or not scientific research will ultimately be turned into knowledge for action.

Similarly, while knowledge can lead to social action, it is at the same time the result of social action. It follows that the capacity to act is by no means identical with actual action: ‘knowledge is not itself already action’.

Thus, usable knowledge is not enough. An ‘epistemic community’ is another requirement. An epistemic community is a ‘knowledge-based network of specialists who share beliefs in cause-and-effect relations, validity tests, and underlying principled values and pursue common policy goals’. Epistemic communities are most influential ‘where policymakers have no strong preconceived views, and the issue is a first foray’. Thus, knowledge plays an active role only where, for whatever reasons, there is latitude or necessity for decision-making. When that latitude is present we will see that:

‘The qualities necessary for an understanding of practical knowledge, which make possible the realization of knowledge, are on the one hand particular findings, and on the other, i.e. on the side of those taking action, the control of situationally specific conditions. These abilities, which make implementing findings possible, can be called capacity to shape, in contrast to knowledge as the capacity to act. […]
In modern society, at the intersection of possibilities to take action and to shape events, the rapidly growing group of experts, advisors, and consultants as mediators of knowledge find their employment and exert their influence.’


Grundmann, R. and N. Stehr, The Power of Scientific Knowledge: From Research to Public Policy (Cambridge: CUP, 2012).
[1] Citing Max Planck, p. ix.

Why do Theory?

“Why do theory? You have to put so much into it and you get so little out of it.”

Given the increasing drift towards empiricism and clinical studies, and the associated de-emphasising of theoretical research across the disciplines, this becomes an important question to consider (and what follows constitutes only a first attempt to provide some explanation).

Doubtless the first half of the equation is true. ‘Doing’ theory has always placed considerable burdens on the researcher. What is less often acknowledged, however, are the many additional burdens placed on the contemporary scholar – and not by the complexities inherent in a particular theory, but by the academy.

What do I mean by this? Firstly, of course, any theory capable of describing all or some aspects of the very complex dynamics of modern society demands a great deal of our time and attention. This is perhaps particularly true of the kind of theory I am drawn to; theory that abandons the a priori. It always seems to me that the a priori maps out, prematurely, significant parts of the journey, determining in advance what should be left indeterminate. The a priori becomes the impermissible safety net.

Secondly, and less obviously, much of the responsibility for the present state of affairs, which demands so much more input from scholars today than scholars of previous generations, lies with the academy. Lizardo speaks of the damaging effects of the de-institutionalisation, devaluation and de-structuration of theory by the academy. Taking each in turn:

De-institutionalisation: Even sociology has seen the de-institutionalisation of theory at the graduate level so that the teaching of theory is reduced “to a single ‘omnibus’ course that attempts the impossible task of going from Montesquieu to Judith Butler in one semester”. Moreover, theory is generally no longer taught by theorists but by those whose work is only ‘relatively theoretical’.
Devaluation: within the academy theory has been devalued in favour of empirically ‘applied’ work which produces ‘useful’ research.
Destructuration: there is no longer any sort of ‘hierarchy of modes of doing theory’. We are left with a confusing and ‘rudderless heterodoxy’, and ‘very little agreement as to the “rules” of the theoretical game’ (Lizardo). (But see Fuller’s deftly witty four-fold categorisation/hierarchy of theorising in which he rightly locates Luhmann in the fourth category: available here)

All of this adds to the demands on scholars who wish to engage in theoretical research. Accordingly, we can only agree that the first half of the equation is correct.

The second half of the equation – ‘you get so little out of it’ – must depend in part on the particular ambitions and limits of the theory employed. If a researcher is not getting much out of theory it may be due to a limit inherent in the theory, or an unfortunate limitation in the scholar, or some external factor. But, for present purposes, this way of approaching the problem runs the risk of evading the bigger question. What, after all, is the value of theory? Why are we drawn to abstraction? What does it offer?

It is interesting that scholars so rarely address this topic directly. The great theorists probably considered the value of theory to be self-evident. But today we are enduring a period in which there is a dearth of new theory or theorists. What passes as contemporary theory now is generally 30–50 years old (Lizardo). Following de-institutionalisation, fewer people are doing theory now. Those who are, the few who address themselves reflexively to the topic, are full of warnings about the fate of particular disciplines in the perceived absence of a central role for theory; or warnings about the fate of theory itself in the near future. Before moving to the positive, I want to take a moment to consider the negative. So we have seen a retreat of theory in the social sciences. So what?


Articulations of the Importance of Theory

Here I take the example of two scholars frustrated with the current state of affairs in their discipline, educational studies, at an interesting moment of change in the mid-1990s.

Ball starts from the negative, arguing that a de-theorised field like his own leaves the researcher ‘prey to unexamined, unreflexive preconceptions and dangerously naive ontological and epistemological a prioris’ (Ball, pp. 265-66). The concern is that without theory one is left with arguments from morality, common-sense ontologies or recourse to authority (e.g. the self-referential system of doctrinal law). Both he and Shilling argue separately that the field of educational studies experienced something of this deficit when the ‘redemptive’ view of education, which held sway from the 1930s to the 1960s, was abandoned. The earlier period was characterised by policies of ‘progressive utopianism’; education policies that revolved around issues of social justice and the emancipatory power of education – policies that promoted education as integral to welfare (Shilling, p. 107).

Thereafter, following the pessimism of the 1970s and complications in the 1980s, researchers took on new identities as ‘school effectiveness researchers’ and ‘management theorists’. Issues related to system design and social justice were replaced by implementation studies focussed on issues like ‘quality’, ‘evaluation’, ‘leadership’ and ‘accountability’ (Ball, p. 258). It is a familiar story, the legacy of which we are still discussing today. And education studies is not alone in this sort of shift of focus. We can trace the move from progressive ideologies to technocratic problem-solving across a range of disciplines in the same period.

In all cases we are tracing the effects of the retreat of theory, of what Shilling describes as the intellectual stagnation of the late twentieth century, which he argues has three consequences:

  1. Loss of cultural capital: the neglect of significant ideas, concepts and theories.
  2. Dominance of the classics: the ideas of the greatest thinkers overshadow their successors so that little new development occurs. There is an associated failure to build on the advances of previous paradigms.
  3. Technical refinement: specialisation is accompanied by a neglect of the most important questions facing social theory and society. Scholars rush to the particular and thereby ignore those central theoretical problems that have sparked the most creative and important work in the social sciences. (Shilling, p. 106)

I am not sure about the ‘dominance of the classics’ (referring to thinkers of the nineteenth and early twentieth centuries), although I agree that the influence of celebrated thinkers of previous generations continues. Lizardo provides a different explanation, pointing to a golden age of theory in the post-war decades, which continued (in France at least) until the 1980s. He suggests that structural changes in the academy mean that we should not expect a successor generation any time soon.

With so much that is negative, is there anything positive to consider?


Benefits of Theory

Finally we return to the substantive question, what is the point of theory? With so many burdens, why bother doing theory at all? I agree with the overall thrust of Ball’s view of the creative potential of theory, and as he states the case so lyrically I quote him at length:

“Theory is a vehicle for ‘thinking otherwise’; it is a platform for ‘outrageous hypotheses’ and for ‘unleashing criticism’. Theory is destructive, disruptive and violent. It offers a language for challenge, and modes of thought, other than those articulated for us by dominant others. It provides a language of rigour and irony rather than contingency. The purpose of such theory is to de-familiarise present practices and categories, to make them seem less self-evident and necessary, and to open up spaces for the invention of new forms of experience. […]

The point about theory is not that it is simply critical. In order to go beyond the accidents and contingencies which enfold us, it is necessary to start from another position and begin from what is normally excluded. Theory provides this possibility, the possibility of disidentification – the effect of working ‘on and against’ prevailing practices of ideological subjection. The point of theory and of intellectual endeavour in the social sciences should be, in Foucault’s words, ‘to sap power’, to engage in struggle, to reveal and undermine what is most invisible and insidious in prevailing practices. Theories offer another language, a language of distance, of irony, of imagination.” (Ball, pp. 266-68)

When Ball goes on to talk about social transformation he becomes less convincing. Knowledge becomes powerful either through technical or social applications, but the complex relationship between knowledge and social action is not self-evident (but this should form the subject of a separate post) (Grundmann and Stehr). There are different modes of theorising. The quoted passage gives a flavour of the mode of theorising I find satisfying. In a time of encroaching normativity, of juridification and ‘genetification’, when too much that should be dynamic in the realm of cognition is consigned to the inertia of ontology, we need theory. We need its rigorous language of distance, defamiliarisation, and consciousness-raising tools of reflexivity.

That theory provides a language of imagination corresponds well, in my view, to Luhmann’s practice. Luhmann described his theorising as empirical – using the term in a quite particular way. It denoted his rejection of Kantian idealism, a priorism, and notions of transcendentalism. Absent all this, the theorist has only contemporary standards of knowledge and logic, and his own abilities, to rely on. Hence the imaginative approach to theory building. As a reader – particularly of his footnotes – one sees the thief at work, lifting ideas from a multiplicity of different disciplines – neurophysiology, evolutionary biology, information theory, contemporary calculus, cybernetics etc. – translating them into the language of social theory and somehow making it work. What would be a liability in the hands of another serves as reinforcement here.

Richard Swedberg is reviving the imaginative approach to theory in a somewhat similar manner. Swedberg emphasises the notion of theorizing as a process and as an acquired skill; theorizing as a creative activity and a skill to be developed (see his LSE lecture here).

In this vein, Luhmann’s theory is markedly enabling. His empiricism is inclusive. His theory provides not an ideology but a methodology, one which sits within an overarching but flexible framework of shared understanding. How to begin? Draw a distinction. To understand a field of inquiry, observe the observers operating there. What distinctions do they use? How do those distinctions structure their inquiries? If every distinction issues from the blindspot, what are the blindspots that they reveal?

Anne Enright advises would-be writers to ‘find a place to stand’. She is addressing writers of fiction, but her advice applies with even greater force to academic researchers who are expected to issue explanations of the world around them and even, at times, prescriptions for the resolution of complex problems.

No one has read the entire canon. Some are drawn to theory, and to particular types of theory. There is always an element of subjectivity and intuition. Pure objectivity is a fiction. It seems to me that finding a theory that you can work with, that you find plausible and insightful, that is geared to the production of new knowledge, is a way of finding your place to stand. And when it comes to the quid pro quo, to getting something out of theory, this is not inconsiderable.



Ball, S. ‘Intellectuals or Technicians? The Urgent Role of Theory in Educational Studies’, 43(3) British Journal of Educational Studies (1995) 255.
Fuller, S. ‘Steve Fuller’s Guide to Reading Social Theory’, (Jan. 2016) available here.
Grundmann, R. and N. Stehr The Power of Scientific Knowledge: From Research to Public Policy (Cambridge: CUP, 2012).
Lizardo, O. ‘The End of Theorists: The Relevance, Opportunities, and Pitfalls of Theorizing in Sociology Today’, pamphlet based on the Lewis Coser Memorial Lecture, delivered at the 2014 Annual Meeting of the American Sociological Association in San Francisco; available here.
Mullally, S. ‘Theories of Rights’, (Oct. 2014) available here.
Shilling, C. ‘The Demise of Sociology of Education in Britain?’, 14(1) British Journal of Sociology of Education (1993) 105.
Swedberg, R. ‘Before Theory Comes Theorizing: Or How to Make Social Science More Interesting’, Annual British Journal of Sociology Lecture, LSE Oct. 2015 available here.


Autopoiesis – from the Greek ‘poiesis’, meaning production or creation – means simply self-production. The concept was developed by the biologist Humberto Maturana to describe a system capable of reproducing itself using only its own elements; elements produced by the system itself. Luhmann redefines the concept so that it is capable of describing self-referential systems. In the following short passage he states that autopoietic systems can be identified by:

‘… their ability to reproduce the elements of which they consist by using the elements of which they consist. Autopoietic systems are not only self-organising systems, able to form and change their own structure; they also produce their own elementary units, which the system treats as undecomposable, as consisting of an ultimate “substance”. Hence autopoietic systems are closed systems dependent on themselves for continuing their own operations. They define and specify their own boundaries. The environment, of course, remains a necessary condition for self-organisation and for autopoiesis as well, but it does not specify system states. It interpenetrates as “noise”, as irritation, as perturbation, and may or may not set off internal efforts of interpretation and readaptation. It does not produce inputs that specify the operation of the system.’


Luhmann, N. ‘The Evolutionary Differentiation between Society and Interaction’, in Alexander, J. et al (eds) The Micro-Macro Link (Berkeley: University of California Press, 1987).