
Science writing in the real world

Abstract

The objective of this contribution is to consider guides to technical writing. Since the professional writes what he does and does what he writes, guides to executing the one bear on performing the other, so this article is about more than just writing. While there is a need for idiosyncrasy and individualism, there are some rules. Documents must have an explicit purpose stated at the outset. By their nature, documents in the applied sciences and business address real-world problems, but elsewhere activity may be laissez faire, and the cost-effectiveness of that in yielding innovations is contestable. A hallmark of written science and technology is that every statement is capable of being tested and of being shown to be wrong, and that methods yield repeatable results. Caution should be observed in requiring authoritative referencing for every notion, partly because of the unsatisfying infinite regress in searching for ultimate sources, and partly to avoid squashing innovation. It is not only the content of messages that matters, but their reliability too. Probability theory must be built into design to assure that strong inference can be drawn from outcomes. Research, business and infrastructure projects must substitute for the frequent optimistic ‘everything goes according to plan’ (EGAP) a more realistic ‘most likely development’ (MLD), together with the risks of even that not happening. A cornerstone of science and technology is parsimony. No description, experiment, explanation, hypothesis, idea, instrument, machine, method, model, prediction, statement, technique, test or theory should be more elaborate than necessary to satisfy its purpose. Antifragility – the capacity to survive and benefit from shocks – must be designed into project and organizational structure and function by manipulating such factors as complexity and interdependency to evade failure in a turbulent and unpredictable world. The role of writing is to integrate these issues and communicate them so that the stakeholders share a vision before, during and after the project.

Introduction

This article addresses the question ‘how are scientific and technical articles to be written?’ This is possibly easier to answer for the youngster in training than for the seasoned practitioner, hopefully not because old dogs can’t be taught new tricks but because universal prescriptions are elusive.

Review

Why is ‘writing’ important?

In the technical world, writing remains the primary means of communicating plans, actions and outcomes. Writing out the conceptions so others can follow has the benefit of clarifying the thinking; it acts as the blueprint for implementation, and it captures the experience upon completion. Writing and action interdepend. If the written plan is defective, so will be the action. If the action is ineffective, honest writing must report that. As the teaching adage goes, ‘if you can’t say it (or write it), it’s not worth knowing’.

Both the developed and developing world share the paradox of youth unemployment alongside skilled job vacancies, not least in engineering, science and technology. The implication is that we are failing to school enough professionals adequately in the art of saying or writing it well enough for it to be worth knowing.

Are there any rules?

In his classic book Against Method, Paul Feyerabend objected to a single prescriptive scientific method on the grounds that this would limit scientists’ activities and thereby constrain progress (Feyerabend 1975). This need for idiosyncrasy is not confined to science. In the business world there is the requirement to differentiate products and services – to make them deliberately and distinctively different so as to get noticed and make sales. This carries through even to business management: if, say, Acme Widgets adopts the same management style as its competitors, it will perform the same as them and not become superior. Acme Widgets needs a different if not better management system to be competitive.

Does idiosyncrasy extend from method to write-up? When I started university teaching I had a firm idea of how science was done, and written up. My students soon cured me. They came up with approaches that did not fit teacher’s prescription, but worked nonetheless. Years later, as a mature student sitting at the back of the MBA class and paying attention not only to the lecture content but also to the lecturing style, it was evident that ‘this is how it is done’ did not generate the interest, learning and progress that ‘how do we solve this problem?’ did.

Might our main question be answered by surveying the readership to ascertain its wants, needs and expectations? The survey results would certainly be interesting, and they would not be irrelevant. But they might just not produce the ultimate answers. To enlarge on the points already made about novelty, consider the following. The market did not perceive a want, need or expectation for the iPod, iPhone and iPad. The market did not anticipate anything like these gadgets, and surveying potential consumers would not have pointed to these innovations. Before they were invented there was no market for the gadgets. It took the technical know-how and artistic flair of Steve Jobs to dream up the gizmos with which the market fell in love. Similarly, and obvious on reflection, a scientific discovery is new because no one thought like that before. The discovery will need to be communicated, and it will need to satisfy a market demand – pre-existing or self-created – if it is to be generally adopted.

There are problems though. Implications of ‘against method’ include that if anything goes, then everything stays, and all methods are of equal merit. This is dubious: evidently not every method will ‘fly’, though beforehand it might not always be possible to distinguish the useful from the useless. Further, there is a contradiction: ‘against method’ is a method of sorts. Surely there are at least some high-level boundary conditions to writing business, scientific and technical articles? Surely there are minima – pretty well universal requirements of the sine qua non type – such as ‘there must be a purpose or problem that the writing addresses’? Surely there must be maxims – operating rules – to be adopted, such as ‘never write for fools’?

Some suggested minima and maxims are offered below.

Purpose and problem

A foremost minimum is that the work must have a clear purpose, preferably stated at the outset. This might seem an obvious requirement. But surprisingly often the purpose is not up front, or it is confused, or both. In one recent case the stakeholders objected that the purpose of the project plan did not appear until page 14 of a 78-page report. There is an asymmetry between reader and writer about the purpose of an article. The writer has foreknowledge of what she is writing, while there is a limitless range to reader preconceptions, so the reader sets out without knowing what is important in a rambling 14-page introduction. In today’s world where there is so much exciting stuff, the reader might not have the patience to get to page 14. To help the reader, tell her the purpose at the start and then, if necessary, explain. Most such explanations are incomplete or an infinite regress of justifications – my purpose is to solve problem P because P affects sustainability, and sustainability is a principle supported by the United Nations, and the UN was formed to support harmony and wellbeing, and harmony and wellbeing are pillars of civilized society, and so on.

Often with natural resources and people there are many conditions to be met. This poses problems for the ostensibly ‘clear purpose’. Even if the case of optimizing among multiple goals can be solved mathematically, in practice – be it written or actual science or technology – this is difficult. Usually the multiple goals get listed without prioritization, and conflicts, contradictions and indecisiveness arise. For example, the water release schedule from the dam specifies that a flood be released on 31 August every year. The World Bank, which funded the dam construction, insists on compliance. But the Environmental Panel says that a ‘blue sky river flood’ is not expected by either the downstream biota or the people, and therefore that the stated objectives of perpetuating the natural river and protecting the riparian people are not met. Hence, unless there is rain on 31 August – highly improbable – dam operators suffer managerial paralysis for fear of the wrath of the Bank or the Panel.

One approach to the multiple conditions is akin to linear programming: set a single objective function, subject to constraints or boundary conditions. For example, I say to my mining clients that the objective of mining rehabilitation is to minimize the cost subject to the following conditions: land capability meets specification c, landscape form meets specification d, soil loss is within limits e, soil fertility meets specification f, and so on. Without having to master the maths of linear programming, miners can identify with this type of objective. Minimizing cost makes intuitive sense. Each specification can be developed, measured, reviewed and revised independently, and measurement on all specifications, and of the cost, can be combined to show overall performance. The approach helps to highlight weaknesses and show where attention is needed to improve management and even understanding and method. It all accords with the adage ‘if you don’t measure it you can’t manage and improve it’.
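
By way of illustration, the following is a minimal sketch – with entirely hypothetical treatments and coefficients, not figures from any real rehabilitation job – of how ‘minimize cost subject to specifications’ can be expressed as a small linear programme using scipy.

```python
# A sketch only: 'minimize rehabilitation cost subject to specifications',
# posed as a small linear programme. All treatments and coefficients are
# hypothetical illustrations.
from scipy.optimize import linprog

cost = [5000, 1200]      # cost per hectare: treatment A (topsoil + seed), B (seed only)

# Each specification becomes a constraint of the form  a @ x >= b,
# which linprog expects rewritten as  -a @ x <= -b.
A_ub = [
    [-0.9, -0.4],        # land-capability score contributed per hectare
    [-0.8, -0.2],        # soil-fertility score contributed per hectare
]
b_ub = [-60.0, -40.0]    # minimum total scores the specifications demand

A_eq = [[1.0, 1.0]]      # every one of the 100 disturbed hectares must be treated
b_eq = [100.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x, res.fun)    # hectares per treatment and the minimum total cost
```

The point is not the mathematics but the framing: cost is the single objective, and each specification stands as an independent, measurable constraint.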

A further aspect to the purpose of an article is that it addresses a problem or constraint. If you want to do something, figure out what is holding you back, and fix it. If my car will not start it does not help to fill the half-empty fuel tank when the battery is flat. In applied fields such as agriculture, environmental science and forestry, orientation around real-world problems is pretty well a matter of course. In academia, though, the problems tackled are not obvious to some of us – filling the fuel tank instead of charging the battery. True, some inventions were made without any thought to application, and at some institutions there is a strong ethic of academic freedom – ‘we do not prescribe to you how and what to teach and research, and you are granted tenure on the basis of the regard your peers have for you’. It is nevertheless contestable that laissez faire approaches are more fruitful than ones directed at worldly problems. Taleb (2004) would probably say that the perceived fruitfulness of laissez faire is ‘fooled by randomness’ – let loose a large number of academic geeks and some are expected to discover something by chance alone. Kahneman (2011) could argue WYSIATI (what you see is all there is) – the many miserable failures are not recorded, so we do not see these and bring them into our cost-benefit reckoning.

Testability

Science and technology have in common the qualities of repeatability and testability. The same instrument or method must yield a repeatable result. A hallmark of written science and technology is that every statement is capable of being tested and capable of being shown to be wrong. The testing would firstly be in the mind, at the time of reading, and secondly in physical reality if the reader were so moved.

An example of a statement that does not meet the criterion of testability is ‘it usually rains just before, at or just after new or full moon’. This prediction can never be found wanting, since there is a new or full moon every fortnight so most of the time is covered, and the qualifier ‘usually’ caters for the few days distant from the lunar events.

A requirement that every statement is testable obviates the need to provide proof or authority for every notion expressed. Giving proof and citing authority for every statement is of course tedious. But there are more serious difficulties. First, there are limits to substantiation. Positive proof is logically impossible – you can only disprove or, failing that, corroborate. And substantiation can become an infinite regress in search of an illusory ultimate authoritative source. Second, absence of evidence is not evidence of absence, as in the case of Taleb’s (2012) turkeys being fat, flourishing and untroubled by any grounds for downfall, until a few days before Thanksgiving. Third, insisting on an authority for every statement precludes new ideas, therefore suppresses discovery and the forecasting of catastrophe, and may defeat the very purpose of science.

Message reliability before content

Kahneman (2011) makes a point that professionals, even those with statistical expertise, have exaggerated faith in small samples, and pay undue attention to the content of messages without cognizance of their reliability. Kahneman and Taleb have recently popularized this issue on which, to their credit, reviewers for top journals have long been vigilant. These people have rendered a service to humanity by emphasizing the overlooked role of chance in our lives, the WYSIATI phenomenon, pseudoreplication, the irresponsible adoption of short-cut statistics like chi-square and Kruskal-Wallis instead of measuring actual sample variability, and so on.

Unfortunately the probability-theoretic rigour popularized by Kahneman and Taleb, and strictly observed in top journals, is surprisingly rare in the applied field, where the stakes are high and can run into US$ billions. Megaprojects, and the reports that motivate them, understate costs, overstate benefits, are heedless of risks, and are guilty of adopting EGAP (everything goes according to plan) when the probability of each individual issue turning out as planned is modest and the probability of their combination – the best scenario – remote. The prudent approach, too rarely applied, is to determine the ‘most likely development’ (MLD), which is itself subject to deviation. Little wonder that megaprojects typically turn out 150% over budget or worse (Flyvbjerg et al. 2003).
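
The arithmetic behind that remoteness is easy to sketch. The figures below are illustrative only – they are not from the article or from any particular project – but they show how quickly individually likely steps combine into an unlikely best case.

```python
# Illustrative only: 25 independent plan elements, each with a 95% chance of
# turning out as planned, leave a low probability that everything does (EGAP).
p_each, n_elements = 0.95, 25
print(round(p_each ** n_elements, 2))   # about 0.28 - the best case is the exception
```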

The want of applied probability theory extends beyond megaproject planning to common project construction and operation, and to day-to-day applied and even academic science.

A typical case is that of river management and instream flow requirements in southern Africa. Study and monitoring are by means of sampling. Single samples are taken at so-called representative sites at intervals over time. Even if a site is indeed representative of its river reach at the time of site selection, this representativeness might not endure because of changing riparian conditions, altered land-use in the catchment, water abstraction and effluent discharges. Many supposedly representative sites are chosen on, among other criteria, accessibility. Such sites are also accessible to vehicular traffic with its attendant pollution, to livestock that drink and urinate, dung and muddy the water, and to people who visit to picnic or wash or fish or dispose of rubbish. The criterion of accessibility is a virtual guarantee that the chosen site is not representative of upstream and downstream. But there is in any case no measurement of representativeness, and indeed no estimate of sampling variation within times as a baseline against which to compare sampling variation between times. The upshot is that the resulting data, sometimes costing US$ millions to collect, do not meet a long-standing criterion of good science, namely that they permit strong inference to be drawn (Platt 1964).

In the case of the applied and academic situation, grass cover is another example. It is notoriously difficult to measure. The poor measurability of the parameter limits the confidence that can be attached to inferences regarding the relative merits of experimental and managerial treatments. Much of southern African rangeland theory hinges on how management increases or decreases basal grass cover, but because of the low repeatability of cover measurement (Mentis 1981) the theory, though not necessarily mistaken, is more a matter of belief than hard science. Rangeland science and management, and other areas in the natural resource field, would forge ahead if there were quick and cheap means of testing alternative scientific hypotheses and management treatments. In other words, if grass cover could be measured more quickly, easily and cheaply, then our turnaround with experiments would speed up and more confidence might be attached to results. This doubtless applies in many other instances.

Grass cover has fractal properties, like the famous case of coastline length (Lesmoir-Gordon et al. 2006) where the same pattern is repeated across spatial scales. As you scale up the coastline you see the same pattern of bays and peninsulas. Measuring coastline length depends on the scale or resolution, and a person walking along the water’s edge would yield a shorter length than a snail following the water’s edge. With grass, as resolution increases, there is a repeating pattern of clusters: colonies, tufts, tillers… Increasing resolution does not improve data quality because the density and shape of clustering are fuzzy and variable, and the interpretation of ‘hit or miss’ with the measuring tool (usually one or other type of sharp point that is lowered into the sward and watched to see if it strikes a cluster) is subjective and poorly repeatable at whatever scale. One option in this kind of circumstance is not to hike up resolution and complexity by resorting to gadgetry such as Tidmarsh Wheels and Levy Bridges, and sophistications such as the distance between the nearest and second-nearest plants, but to scale down and simplify, permitting, albeit with a rough and ready technique, collection of large amounts of data quickly and cheaply. For instance, grass cover can be rated by striding through the sample area and scoring, at every fall of the right foot: 5 (lawn), 4 (nearly a lawn), 3 (more grass tuft than bare ground), 2 (more bare ground than grass tuft), and 1 (very sparse to bare). In one application of this simple approach it was possible for one 67-year-old worker to collect 1500 30-point samples along a 555 km fuel pipeline construction servitude in one month, and to survive the ordeal to be able to draw strong inferences on improvement or otherwise in grass cover between one year and the preceding year.
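
A minimal sketch, with simulated ratings rather than the pipeline data, of how such foot-fall scores might be reduced to data: each 30-point sample yields one mean score, the spread of sample means within a year provides the baseline variability, and the two years are then compared against it.

```python
# A sketch with simulated ratings, not the pipeline data. Each 30-point sample
# is reduced to one mean score; the within-year spread of sample means is the
# baseline against which the between-year difference is judged.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
year1 = rng.integers(1, 6, size=(20, 30))   # 20 samples x 30 foot-fall ratings (1-5)
year2 = rng.integers(2, 6, size=(20, 30))   # hypothetical second year, somewhat better cover

means1, means2 = year1.mean(axis=1), year2.mean(axis=1)   # one score per sample
t_stat, p_value = stats.ttest_ind(means1, means2)
print(means1.mean(), means2.mean(), p_value)
```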

Another common mistake in doing science and writing it up is the unquestioned assumption that statistical significance is absolute. This is especially so in monitoring studies. Strictly, the demonstration of statistical significance depends on several factors: the absolute size of the difference between the compared means, the sampling variability, and the number of samples. The demonstration of statistical significance in scientific articles and project reports is commonly a fortuitous outcome of arbitrarily chosen sample size in relation to sample variance and the nature of the experimental or managerial treatments, with no consideration of what constitutes a material difference. For example, the mean soil P of 1 mg/kg at site A might – by the sampling design used – be statistically different from the mean soil P of 2 mg/kg at site B, but the difference would be immaterial in the context of growing a pasture of subtropical grasses, which needs a soil P of at least 15 mg/kg. A better way of proceeding is first to determine the amount of change in the parameter of interest that is material. Then a pilot study should be undertaken to determine sampling variation (Figure 1).

Figure 1. Schooling a student in determining the vagaries of sampling. Designing an appropriate statistical test requires pre-specification of the size of material difference and knowledge of sample variance in order to calculate the requisite sample size.

The appropriate sample size, N, can then be estimated by calculating

N = 2 s^2 t_{0.05}^2 / (\bar{X}_1 - \bar{X}_2)^2

where s^2 is the variance determined by the pilot study, t_{0.05} is the value of t in Student’s t-distribution for the appropriate number of degrees of freedom, and \bar{X}_1 - \bar{X}_2 is the chosen size of material difference. To illustrate, in the above example of rating grass cover, given \bar{X}_1 - \bar{X}_2 of 0.5, with s typically at 0.5 and the t-value assumed equal to 2, N works out at five 30-point samples. It follows that the requisite data to test for a material difference in grass cover can be collected by one worker in less than an hour. This compares favourably with Tidmarsh Wheels and Levy Bridges that might need to be carried kilometres by a team up hill and down dale, operated on steep and rough terrain, and for which thousands of point observations might be necessary to assure confidence in results.
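
As a sketch, the calculation is a one-liner: the function below is a direct transcription of the formula, with a default t-value of 2 as the rough rule of thumb used above; the inputs in the example call are hypothetical.

```python
# Sample size from N = 2 s^2 t^2 / d^2, with s the pilot-study standard
# deviation, t the Student's t value and d the chosen material difference.
import math

def sample_size(s, material_difference, t=2.0):
    return math.ceil(2 * s**2 * t**2 / material_difference**2)

print(sample_size(s=0.8, material_difference=0.5))   # -> 21 samples
```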

Parsimony

A cornerstone of science and technology is parsimony. No description, experiment, explanation, hypothesis, idea, instrument, machine, method, model, prediction, statement, technique, test or theory should be more elaborate than necessary to satisfy its purpose.

In my experience of reviewing countless postgraduate theses, scientific articles and technical reports, the body language of the writing, judged on such criteria as parsimony, reflects the effectiveness of the execution, be that experimenting, modeling, monitoring or whatever. Where the writing is not concise and precise, digging deeper in the document usually reveals the same shortcomings in the planning and execution.

The ground rules of modeling capture the essence of the point here. The rules are very explicit: start simple, with the simplest model that might work, and add complexity later if necessary (Starfield et al. 1994). There is no logic to starting with a complicated model. For example, where would one start? There is no upper limit to complexity. And it would be wasteful to include any more than the essentials. The raison d’être of modeling is to exclude all but the bones of the subject issue. Yet in project after project what happens? The cognoscenti propose methods and models of amazing complexity, with all kinds of bells and whistles. Maybe the consultants and contractors, having vested interests, want to make the job big and complicated to seem clever or to make more money. Examples in southern Africa include rangeland and river condition assessment and management. The condition measurement involves, among other things, identification, if not measurement of abundance, of every plant, macro-invertebrate and fish taxon. This is unrealistic for many reasons. The ordinary professional can master identification of a few species, but few people are experts on a whole biota. Even for the few prominent species, knowledge of response to perturbation is skimpy. For the many lesser species we simply do not know, so their inclusion in a model just adds noise. In the developing regions of the world the taxonomy of the biota is uncertain, and it can be difficult to put a name to many a specimen. Within a group, like plants, species vary in form and abundance, so a one-size-fits-all sampling method does not produce reliable data. Then unreplicated sampling is done at fixed sites of unknown representativeness, as explained above. Regardless of the uncertainties, the results are plugged into the great model that spits out a condition of unexplored sampling variability. If the complicated exercise were repeated tomorrow, or 50 m away, or at a randomly selected sample site, how different might the result be? How different must the result be to matter materially? In few cases does anyone know. Cost-effectiveness, and the likelihood of ever being actually applied, could be improved by simple testable models involving a few indicator variables for which specific measuring techniques can be designed to yield reliable data. In most projects that I review, the monitoring recommended by the consultants never gets done because it is too all-embracing and complicated, and at a cost outweighing the benefits that could accrue if the consultants’ recommendations were followed.

Simplicity of technique in modeling or monitoring or whatever is not the same as being simplistic. To the contrary, devising a simple and effective method is really challenging. The applied situation often has severe constraints, including limited resources of time, budget and manpower, and events that threaten the best-laid plans. That one is in the applied situation, where unanticipated costs can run into US$ billions, does not mean any lesser standard applies. Indeed, ‘done at the highest intellectual level’ always applies.

This leads on to Taleb’s notion of antifragility (Taleb 2012). I say Taleb because he popularized the appreciation that a system has a greater or lesser capacity to withstand shocks, and that this capacity can be increased by exposure to shock. In fairness though, Holling and coworkers at the University of British Columbia developed ideas of system resilience and adaptive management in the 1970s. These phenomena apply across organizations from the human body to whole biotic regions. A few examples are as follows.

In the case of the ‘smaller’ organization, the marathon runner trains his body by stressing it. With a little stress the body responds and increases its capacity to tolerate more stress. If the stress is extreme and continued then the body fails – injury occurs. The athlete must walk the tightrope between over- and under-training. In life there are many applications of this principle of increasing our personal capacity by stressing the body and thereby training it to improve performance. In the case of the ‘bigger’ organization, the African savanna and steppe have an amazing capacity to tolerate stress and disturbance in the form of recurrent drought, fire, flood and herbivory. Earlier botanists working in South Africa considered that present grassland and savanna areas in the moist eastern regions were forest and scrub forest as little as 800 years ago, and that it was only after the arrival of the Bantu that the woody vegetation was opened out by chopping and burning (Acocks 1975). However, it turns out that much of the savanna and steppe has been burnt every year or every few years, by lightning or by man and his predecessors, for millions of years – how else do we explain the biodiversity of these systems, the fire adaptations of many of the organisms, and the seeming need for the system to be burnt periodically lest its biodiversity be lost (Ellery and Mentis 1992)? Of course it is common knowledge that forest does not ‘bounce back’ like steppe and savanna when defoliated, and at least in some parts of the world forest patches are refugia. But even here there are interesting contrasts. Along the northeast coast of South Africa, and going north into Mozambique, the dune forest has remarkable recovery capacity, converging on climax species composition after disturbance (bull-dozing or dune mining) in as little as 54 to 70 years (Mentis and Ellery 1998). Perhaps unlike most forest regions, along the southeast coast of Africa chronic disturbance has been a feature over recent evolutionary and ecological time. Sea level has risen and fallen as a result of monoclinal tilting and global warming and cooling, and tropical cyclones and fires have occurred with short return periods. In consequence, dunes have been built, vegetated and destroyed frequently, and the dune forest would be expected to be resilient to disturbance of this nature.

In 1985 when I moved to the University of the Witwatersrand, my predecessor, Brian Walker, now at CSIRO in Australia, mentioned that a high proportion of his postgraduates did not complete the write-up of their studies on schedule. Evidently EGAP (everything goes according to plan) is an optimism misplaced not only in the world of business and infrastructure projects but also in science and academia.

Scientific research, and its write-up, deliberately pursue uncertainties and unknowns, and are therefore predisposed to the risk of EGAP failing. How does one satisfy the research supervisors and the funding agency that progress is being made, and that the project is not falling behind schedule or off-target? One option is to plan and execute the research in a succession of small steps. One of my students, Christo Fabricius, adopted this approach. In his research into habitat suitability for a widely distributed antelope in southern Africa, the kudu (Tragelaphus strepsiceros), Fabricius first collected data on a wide range of plausible habitat determinants in relation to kudu presence and absence. He analyzed the data crudely using multivariate techniques of the correspondence analysis/factor analysis type. Having identified the variables with the highest correlations with kudu, he reduced the number of variables and collected better data. The study proceeded by such successive approximations. The write-up then comprised stand-alone ‘chapters’, yet the whole gave the history of discovery. A famous case of this approach was that of the grouse (Lagopus lagopus) research unit on the heather moors of the Scottish Highlands, where initial hunches framed research which led to the formulation of hypotheses that were then tested, yielding more refined hypotheses for the ongoing investigation. The successive publications over time told a fascinating story of unfolding knowledge of the determinants of grouse abundance.

Young researchers are misled when they read superb write-ups of scientific studies in foremost journals. The impression is of once-off brilliant design, expeditious execution and efficient yield of significant results. Not revealed by journals is a much messier reality, as described for example by Watson (1968). In our study of dune mining and forest recovery, Fred Ellery and I had to rerun our data collection and analysis to test the effects of mining rigorously (Mentis and Ellery 1998). A referee advised that we should regard mining/no-mining as a dummy variable and then use multiple regression to see which of the many independent variables (including the dummy variable) significantly affected the dependent variable (species richness). I had heard of dummy variables previously but never seen a practical example. The application in this case showed that mining was not different from other dune disturbances, and that the overriding determinant of forest recovery was time since disturbance. Another interesting personal experience was in trying to test the formative causation proposed by Sheldrake (1987). The hypothesis proposes that the first occurrence of an event creates a precedent and that thereafter, by a force called morphic resonance acting across any space, the event happens more easily. For example, after rats have learned a new trick in one place, other rats elsewhere seem to learn it more easily. To materialists this might seem implausible, but even physics, the bedrock of materialism, entertains strange forces such as entanglement, about which Einstein quipped ‘spooky action at a distance’. My test of formative causation was to have students learn random sequences of letters, test the students, provide a second set of randomly sequenced letters half of which had been learned before and half unseen, and test again to see whether recall of previously learned sequences was better than of sequences learned only once. I thought my test was definitive. But it failed. While most of the students learnt the random sequences in the rows of the matrices provided, as I wanted them to, a few ‘Feyerabends’ memorized the letters in columns, which I did not anticipate. My conception of the method of testing was at fault, and my intended statistical analysis invalidated. Of course I intended to repeat the test with a better design, but yet again EGAP failed because I left the university and was denied easy access to cheap and biddable study ‘rats’. Failed EGAP must happen often in research, requiring re-plan and re-run of the test. This actual course of science is rarely documented in formal scientific writing. It is no wonder that learner scientists have the mistaken expectation of the once-off experiment or test that quickly produces a publishable result or the award of a degree.
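
As an aside, the referee’s advice translates into a few lines of code. The sketch below uses simulated data, not the dune data, and statsmodels as one convenient fitting library; the point is simply that the mined/unmined dummy enters the regression like any other explanatory variable.

```python
# Simulated data, not the dune data: species richness driven by time since
# disturbance, with a 0/1 dummy for mined versus other disturbance that has
# no real effect. Multiple regression should recover exactly that.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60
years_since = rng.uniform(0, 70, n)        # time since disturbance (years)
mined = rng.integers(0, 2, n)              # dummy variable: 1 = mined, 0 = not
richness = 10 + 0.8 * years_since + rng.normal(0, 5, n)

X = sm.add_constant(np.column_stack([years_since, mined]))
fit = sm.OLS(richness, X).fit()
print(fit.params)     # intercept, time effect, dummy effect
print(fit.pvalues)    # dummy coefficient expected to be non-significant
```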

Evidently the trick with science is to iterate ‘plan-do-review & revise’. The shorter and more frequent the iterations, the less likely are unknowns to lead the researcher down blind alleys. Of course the ‘plan-do-review & revise’ applies not only to scientific research but to all kinds of projects, if not to life itself. The notion of the project plan, or life plan, being a once-off definitive blueprint is illusory. No one can predict the future, and the bigger the future – as in the bigger the project – the more likely it is that EGAP will fail. This is no reason to abandon the plan. On the contrary, one must start with the best plan that current knowledge and circumstances allow. Then this ‘best’ plan must be updated, by the iterated ‘plan-do-review & revise’, at frequent intervals.

Naturally the plan cannot afford to be a 500-page treatise that takes a year to revise. The compilation needs to be an expeditious, succinct statement of – depending on context – objective, constraints, hypotheses and risks, and of the appropriate actions, controls and tests. To make it work there must be targets, the targets must be measured, and the folk involved must be rewarded for achieving the targets. The presentation – the writing – of this is very demanding.

But alone this repeatedly updated plan is not enough to excel, or even just survive, in Taleb’s ‘world we don’t understand’ or the United States military’s VUCA (volatile, uncertain, complex, ambiguous). Whether it is a business, infrastructure or research project, or even the individual person, it will benefit from being inured against ‘the slings and arrows of outrageous fortune’. How might this challenge be approached? How is antifragility enhanced? Deliberately exposing the system to stress, and running antifragility drills (cf. emergency drills), are advised. But as entertaining and informative as Taleb’s books and articles are, there is limited guidance on how to design organizational structure and function to avoid going belly-up in the face of turbulence. Can we remedy this? In Table 1 below, fragility and antifragility are juxtaposed on a number of structural and functional dimensions of systems such as business, infrastructure and research projects.

Table 1 How to survive a turbulent world

The reader is now invited to consider how to design, execute and write up his or her next project, or personal plan for the next year.

Discussion

The thrust of argument in this article is that technical writing, and the action or system that it describes, must be purposeful, problem-orientated, testable or repeatable, parsimonious and antifragile.

None of this is new, but surprisingly often – even in the case of megaprojects in which the stakes run into US$ billions – the minima are insufficiently applied in combination, both in conception, design and execution on the one hand, and in critical evaluation on the other. The typical project – be it research or real-world – is too accepting of EGAP. The oversights arise in several ways. First is the inevitable caprice of random variables. There is super software such as @RISK (http://www.palisade.com) for dealing with this, but in the more than 20 years since I was introduced to it I have seen it used on projects only twice. Second, there are the non-random variables (streamflow is an example), non-linearities and contingent events that prompt Black Swans (unpredictable events of big consequence), which require antifragile properties to be designed in. Yet the norm is complex projects, with critical interdependency of components, requiring omniscient and omnipotent demons to manage them, for which the MLD is ‘too big to fail’ and good money is thrown after bad in a vain effort to rescue the image, the pride and the project. Third, and on a different tack, the project proponent does not engage the stakeholders and get their buy-in. The authorities and proponents reserve decision-making to themselves, for they know what is best for us. They might inform the stakeholders, and invite their comment, but rarely do they obtain unbiased opinion by statistically designed surveys, engage in dialogue one-on-one or in forums, facilitate meetings of key stakeholders to have them decide their priorities, and involve them in decisions and implementation. The results are products that don’t generate the forecast revenues, trains without enough goods and passengers to make them pay, a road where it is not wanted and no road where it is wanted. There are some excellent guides on how to reduce these problems (Porter and Kramer 2006; Decker et al. 2012). And of course the critical commonality is communicating, of which skillful writing is an indispensable part.
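
The first of those oversights – treating planned values as if they were certainties – is what Monte Carlo tools such as @RISK address. The sketch below is not @RISK and uses entirely made-up cost figures; it simply shows the principle of sampling each cost component from a distribution and inspecting the distribution of the total, rather than adding up single planned values.

```python
# A toy Monte Carlo in the spirit of @RISK (this is not @RISK, and the cost
# figures are made up): sample each cost component from a distribution and
# look at the distribution of the total, instead of summing planned values.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
earthworks = rng.lognormal(mean=np.log(40), sigma=0.3, size=n)   # $ millions
structures = rng.lognormal(mean=np.log(60), sigma=0.4, size=n)
delays     = rng.exponential(scale=10, size=n)                   # contingent events

total = earthworks + structures + delays
planned = 40 + 60            # the EGAP figure assumes no delays at all
print(np.median(total), np.percentile(total, 90))   # MLD-style summary
print((total > planned).mean())                     # chance of exceeding the plan
```

With these made-up figures the planned total is exceeded in most simulated runs; the single EGAP number conceals those odds entirely.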

Conclusion

In conclusion, writing remains the key medium of communication and the link between initial conceptions, plans, execution, outcomes, experiential learning, and the next project. With knowledge exploding, and there being ever more theories and facts, writing hasn’t got easier. The challenge is to compile the succinct purposeful problem-orientated reliable parsimonious message. Developing nations and their young talent can surely do a better job than is the current norm.

References

  • Acocks JPH: Veld Types of South Africa. Memoirs of the Botanical Survey of South Africa No. 28; 1975.

  • Decker DJ, Riley SJ, Siemer WF: Human Dimensions of Wildlife Management. 2nd edition. Baltimore: Johns Hopkins University Press; 2012:286.

  • Ellery WN, Mentis MT: How old are South Africa’s grasslands? In Nature and Dynamics of Forest-Savanna Boundaries. Edited by: Furley PA, Proctor J, Ratter JA. London: Chapman and Hall; 1992:283–292.

  • Feyerabend P: Against Method. London: Humanities Press; 1975:336.

  • Flyvbjerg B, Bruzelius N, Rothengatter W: Megaprojects and Risk. Cambridge: Cambridge University Press; 2003:207.

  • Kahneman D: Thinking, Fast and Slow. London: Penguin; 2011:499.

  • Lesmoir-Gordon N, Rood W, Edney R: Introducing Fractal Geometry. Cambridge: Icon Books; 2006:176.

  • Mentis MT: Evaluation of the wheel-point and step-point methods of veld condition assessment. Proc Grassld Soc Sth Afr 1981, 16: 89–94.

  • Mentis MT, Ellery WN: Environmental effects of mining coastal dunes: conjectures and refutations. S Afr J Sci 1998, 94: 215–222.

  • Platt JR: Strong inference. Science 1964, 146: 347–353. doi:10.1126/science.146.3642.347

  • Porter ME, Kramer MR: Strategy and society: the link between competitive advantage and corporate social responsibility. Harv Bus Rev 2006: 78–92.

  • Sheldrake R: A New Science of Life: The Hypothesis of Formative Causation. London: Paladin Books; 1987:287.

  • Starfield AM, Smith KA, Bleloch AL: How to Model It: Problem Solving for the Computer Age. Edina: Interaction Book Company; 1994:208.

  • Taleb NN: Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets. London: Penguin; 2004:316.

  • Taleb NN: Antifragile: How to Live in a World We Don't Understand. London: Penguin; 2012:519.

  • Watson JD: The Double Helix. London: Weidenfeld and Nicolson; 1968:235.


Acknowledgements

I thank Klaus von Gadow for prompting this article, and ‘Student’ for Figure 1.

Author information

Corresponding author

Correspondence to Mike Mentis.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Mentis, M. Science writing in the real world. For. Ecosyst. 1, 2 (2014). https://doi.org/10.1186/2197-5620-1-2
