The Greedy Clearing Algorithm and Conservation Policies

Environmental policies to retain natural vegetation do not work particularly well, and offer little long-term hope of expansion and restoration of natural vegetation. The reason for this is a pattern in the general logic of clearing applications that I call the ‘Greedy Clearing Algorithm’. It goes something like this:

The Greedy Clearing Algorithm
Whenever you are concerned with whether to conserve or destroy a place or thing, ask whether there still exists some remnant of that place or thing somewhere else. If there are remnants somewhere else (e.g. in a Reserve area), proceed to clear.

original (Craig Duncan).
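Stated as code, the ‘algorithm’ reduces to a single remnant-exists test (a satirical sketch; the class and predicate names are mine, not drawn from any actual policy document):

```python
class Reserve:
    """Minimal stand-in for a mapped reserve area (illustrative only)."""

    def __init__(self, habitat_types):
        self.habitat_types = set(habitat_types)

    def contains_remnant_of(self, habitat_type):
        return habitat_type in self.habitat_types


def greedy_clearing_decision(habitat_type, reserves):
    # The Greedy Clearing Algorithm: if some remnant of this kind of place
    # survives somewhere else (e.g. in a Reserve area), proceed to clear.
    if any(r.contains_remnant_of(habitat_type) for r in reserves):
        return "clear"
    return "conserve"
```

Nothing in the decision asks how much remnant survives, or whether it is viable – which is precisely the problem.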

The definition of a Reserve (or a protected space) is often simplified as a 2D space on a map, and that sets the boundaries for interference from the outside. Setting boundaries changes the way we interact with the environment. Decisions may end up being based on general rules, and less on local observation and sensitivities.

If you do not have the planning documents in front of you, the realisation of what is happening might be somewhat delayed by the fact that we do not jump immediately from position (1) to position (6). What’s also not apparent is that as clearing proceeds, species will tend to flee the damaged areas on the periphery of the cleared area, or there simply won’t be any guaranteed habitat. All that happens is that the ultimate clearing takes time to manifest itself.

No real protection by ‘self-designated’ reserves within a clearing zone

Most commonly, clearing decisions assessed by local authorities are dealt with by considering only specific reasons for local restraint, such as a concern related to a threatened species, or some significant environmental impact. The policy is designed around considering these issues only after they become an obstacle to an economically-focussed activity. For example, under Koala protection legislation and regulations (at least in the last decade or so), the protected areas might be determined on a case by case basis, and only when development approvals were sought.

In some Australian States, it is up to the logging industry to manage a forest’s clearing, which effectively makes the industry itself responsible for protection of the species within that environment. As a result, the evidence needed to establish a local ‘protected area’ (unable to be cleared) may be gathered by people with little interest in finding it.

Not surprisingly, observations of species and their habitats made by those with a vested interest in clearing are not likely to be as thorough, as well informed, or as alert to confounding factors as observations made by those without such an interest. One confounding factor is that, anywhere near active clearing, animals tend to hide or flee the interference. Logging activity can thereby manufacture evidence of ‘absence’, which, in this bizarre world of needing to look for a reason not to clear habitats, just makes it easier to clear them.

In the absence of observations, or a genuine interest in time-consuming observations, it’s much easier for industry to put time into ‘desktop’ intelligence, a substitute for real science or genuine observation. This involves using a rule or scientific ‘model’ of where animals might live as the basis for decisions. The model is a formula that tries to predict where animals will live, or how many there are, even though you don’t actually know. Unless these models are tested and shown to be useful (which would require observations anyway), they end up being worth very little, because they are not actually scientifically justified.

Within a clearing zone, leaving clearing to those who are ‘hungry’ for more clearing will tend to make them minimise the areas that they self-designate as ‘protected’. They adopt abstract, rigid boundaries for protected zones that are symbols of protection rather than instruments of it, because they are too small and arbitrary. They do not answer the question: is there adequate protection for the actual extant population of the threatened species?

Example 1

Here is a recent example of how someone uses the Greedy Clearing Algorithm to justify clearing of an area that is visited by Black Cockatoos (obtained from looking up the EPBC Act notifications, as the EDO offices do).

Source: EPBC Act Referral 2020-8620

Paragraph 3.5 of that referral states: “The vegetation is not representative of a Threatened or Priority Ecological Community.” This illustrates the other trend in clearing applications – that the more common something is (whilst we consider it arbitrarily plentiful), the less concern there is about removing it.

The focus on the extreme (threatened species) means that there is a positive reinforcement for clearing whilst it remains within the ‘normal’ range of retained vegetation or fauna, whatever that means. That is, it is only when we impose a last-resort protection e.g. we reclassify a species as “Threatened”, that the inevitable clearing that would otherwise occur is restrained. There is no pre-emptive (precautionary) restraint on clearing, in non-emergency circumstances.

Example 2

Here’s an example, similar to the time-based comparisons given above, of clearing over a short period of time (months), drawn from the recent NSW Legislative Council inquiry into Koalas (page 37):

Figure 5, page 37: Koala hub in Wang Wauk State Forest, May 2018
Figure 8, page 37: Koala hub in Wang Wauk State Forest, December 2018

In the case of koalas, conservation mapping tools and ‘offsets’ are used to try and provide a more general planning tool, but do these work? Are the maps effective? The reality is that mapping is only the first stage in assessment, and that protections might not actually be sufficient if there is ambiguity or uncertainty in what areas could possibly count as reserves. The difference between policy and reality must always be borne in mind.

Back in 2015, concerns were raised about the paucity of mapping of Koala habitat in NSW.

In 2016 Koala protection was focussed on protection of Koala Habitat in NSW State Forests, rather than rural areas. Attempts to map Koala habitat within those forests commenced only in 2016, with a pilot study.

In NSW, there is now the concept of ‘Core Koala Habitat’ and ‘Koala Habitat Protection’, which essentially work in the same way as the ‘Reserves’ idea, but there is far less certainty about whether particular mapped areas qualify for higher protection.

But in the last 3 years, massive areas of Koala habitat have still been cleared in NSW and Queensland. Even more recent media reports suggest that State-based clearing of Koala habitat is still proceeding at a rapid rate. Last year, it was suggested that clearing decisions in Queensland by its EPA were being made based on precedents and not current information.

Some criticism has been directed at the fact that the protected areas are subject to exemptions that subvert the purposes of the planning regulations, including rural land exemptions. The EDO, in its “Submission on the Review of the Koala SEPP (State Environmental Planning Policy 44 – Koala Habitat Protection)”, noted that the definition of koala habitat was complicated and might not be sufficient to capture all required areas. The definition “required evidence of a resident population of koalas, with such attributes as breeding females with young and a historical record of the population”. In some respects, it required the kind of proof of continuity that might arise in native title claims.

In March 2020 NSW issued a replacement of SEPP44 in which the definition of ‘core Koala habitat’ had been revised.

The NSW Legislative Council Inquiry (Koala populations and habitat in New South Wales) concluded, in its June 2020 Report, that one of the many problems in actually implementing these ideas was the fragmentation of areas of habitat, and the loss of habitat, particularly of more mature trees, through logging (paras 2.60-2.61). Compounding this was evidence that the Forestry Corporation had repeatedly breached its logging requirements (paras 2.67-2.68). It is clear that private industry also contributes to decline, particularly where it undertakes pre-emptive mining activities in Koala habitats (paras 2.108-2.130).

In Queensland, there is also the concept of essential koala areas (Koala Priority Areas) and habitat mapping. One of the latest maps is the Koala Conservation Plan Map as provided for by the Nature Conservation (Koala) Conservation Plan 2017. The South East Queensland Koala Conservation Strategy includes details of this with the Koala Maps information.

Word Processors as a Legacy Format

Data management issues

Word Processing software is a legacy item, but it is still perceived as the foundation of legal software tools. It’s the engine for pumping out correspondence and legal documents. Software vendors still base their products around the ability to integrate with, or leverage, word-processing software.

As a result, we don’t have a research focus on the middle ground, where:

  • the content itself requires computational tools.
  • the data must be managed outside of the word processing environment.

When lawyers finally realise that information in Word Processors is just ‘rendered output’, and not an intelligent basis for either input or the application of data science, we will see a shift in the type of tools that lawyers are interested in. The solution is not building tools around Word Processors, but removing word processors from the upstream workflow, and using them, if at all, to clean up the appearance of output reports.
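The contrast can be made concrete. In a word processor a clause is only rendered text; held as data, the same clause can be queried and transformed, and the rendered form becomes a mere output of the data (a minimal sketch; the field names are my own illustration):

```python
# A clause as rendered output: opaque to computation beyond string search.
rendered = "3.1 Termination. Either party may terminate on 30 days' notice."

# The same clause as data: each unit addressable, queryable, transformable.
clause = {
    "id": "termination-notice",
    "number": "3.1",
    "heading": "Termination",
    "text": "Either party may terminate on {notice_days} days' notice.",
    "parameters": {"notice_days": 30},
}


def render(clause):
    """Produce the word-processor-style text from the data, not vice versa."""
    body = clause["text"].format(**clause["parameters"])
    return f"{clause['number']} {clause['heading']}. {body}"
```

The rendering direction is one-way: the data can always produce the text, but the flat text cannot reliably reproduce the data.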

It is not in the interests of the present software market to treat their own products as legacy products, or ones that need to be unpacked and treated as ‘rendering’ tools, not information processing tools.

Once Word Processing is seen as a legacy format, we will move on to tools that help communicate legal knowledge, and interact with it. In order to interact with legal information, we need algorithms that help extract information from legacy formats, and algorithms to work with legal topics in flexible data structures and interfaces. These are recipes for transforming human ‘commands’, in an interactive environment, into ‘displays’ of the relevant information.

Workflow issues

The Marshall McLuhan mantra that the ‘medium is the message’ applies just as much to word processing as any other medium. Software is a medium, and word processors are a subset of software with their own specific influence on behaviour and thinking.

Word processors push you towards laborious crafting of documents, or ‘templates’, because they are very crude when it comes to navigating, storing or referring to specific legal topics. We can’t move topics around easily, share them, refer to them or benefit from computational speed when trying to do so. In fact, the idea that speed (or directness) might be important is really not part of the conversation.

Do not be fooled into thinking that cut and paste is a tool for computation. It’s a mechanical tool, designed to make a human being responsible for a very low level of information transfer, leaving the decision-making outside the computing environment. It binds your hands to the computer. That’s not how you would think in the high-performance computing world, and it’s a lesson worth applying to other areas.

Word processor tools that prioritise making electronic documents look like traditional paper ones are less interested in how we might apply computation to the contents (e.g. to legal document topics). Computation is used merely to simulate a clever, style-conscious typewriting machine. It assumes that a typist or clerical worker is going to be working with this information, not someone who wants to use an interactive command set that relies on domain-specific (and therefore efficient) terminology.

A stagnant market

The market for Word Processors is such that vendors want to sell the same tool to the whole world. It takes a lot more work, and potentially narrows the market, to make information-based tools for specific professions or industries. They have to be good enough to attract a premium (at least for established companies). For new players, new tools might be priced to be competitive with the more generic Word Processor tools.

The current environment has achieved a ‘stable equilibrium’ because naive users of Word Processors haven’t had the critical pressure to evaluate what they do and how they do it, and so to raise their expectations of their software tools. Software vendors have achieved market dominance with tools split into separate buckets, where text tools like word processors are inherently distinguished from spreadsheets and databases. These do not exhaust, by any means, the possible designs for data structures, interfaces or workflows. However, whilst they are prevalent, people will fit themselves to the software, rather than develop more intelligent tools to fit their work.

New tools in the middle ground

This trend toward software that stores data in the appearance-focussed file formats of ‘word processors’ has influenced what people regard as the ‘norm’. It is expected that people resume the manual typing process that renders output as notes on a page. This is fine if you want to write, but what if you want to manage the contents as part of repeated, information-rich data? There is no capacity to do so.

Computer scientists have paid more attention to ‘natural language processing’ or ‘search’ algorithms than to analysing the legal content to which you can legitimately apply computation and data science to achieve benefits for data storage, data analysis, and output.

The consequence is that a topic like ‘algorithms’ as applied to legal contracts or documents hasn’t received as much attention as it should. And, as yet, there isn’t an interest in high quality free or open-source software projects, or small market prototypes, that adequately bridge this gap. It is a gap that shouldn’t be left to software vendors, but to people who are able to analyse their own expert domains and ask better questions. This will change, but first it will require promoters, and people who view Word Processors as a legacy format.

Paper-based legal documents have always been static versions of an interactive medium. We don’t read them in a linear way, even if they look like that. They are structured for some interaction, but it is limited by the linear medium. The opportunities now are to escape that linear world and move into the interactive ‘conversations’ that will be opened up by electronic storage formats.

The same will occur in legal proceedings involving civil litigation or disputes. The ‘conversation’ or interactive format of paper was achieved by having a sequence of interactions – a statement of claim, and then a response in the form of a defence. This was the ‘interactive’ approach in a low-tech world. But within each of those documents there was specific interaction between particular topics or paragraphs. The true interaction between content happens at a level internal to each of these paper documents. Word processors don’t allow you to get at, access and leverage those internal relationships. We are going to need better tools.

Computational legal document content and data management

This note explains why I think that management of the specific content in legal documents is going to pave the way for the next big shift in legal tools, for both lawyers and customers/clients. It requires acknowledging that Word Processors are ill-suited to anything more than basic data management of legal content and that a new paradigm is needed.

Traditional drafting

The process of drafting legal documents probably only matters to those who are in the business of making legal documents, and want to think about them intelligently. Some large firms have dedicated precedent lawyer roles. The senior lawyers who then want to prepare documents for clients will use these templates, but with the intention, in many cases, of delegating the drafting work to more junior lawyers. They want to use the templates so that they don’t have to explain everything in detail, and any variations will be minimal. Even with a degree of independence given to the junior lawyer, instructions might be simplified to something like this:

Here’s a template, copy the structure and make what amendments you think you need. Check your clauses match the clause library, and make sure you’ve got all the definitions that are needed.

Templates might be written with preparation or use notes, but often not on the assumption that you want to take a higher-level, more generic approach to contractual drafting.

Most law schools do not spend a great deal of time on practical skills like drafting contracts; they know that precedent books and law office templates exist anyway. They focus on the meta-data that exists as contract law – the rules that tell you what a contract is, its essence, whether it exists, and for how long. In so doing, they tend not to focus on the recipes (algorithms) needed to do any drafting.

Once we start to think about the contents of legal documents as data, we might want to know what algorithms the lawyers (experts in their domain) actually use to work with legal information. We can’t just ask them how they use Word Processors, because that just describes how they fit the tools they are given. Lawyers adopted ‘desktop publishing’ tools when they moved from secretarial assistance to their own desktop computers and the use of ‘word processors’. But that’s not how they work with the content in their minds.

Contracts as template documents

Contract drafting is a bit like following a recipe. The basic approach to writing these documents (according to some unspecified recipe) is, however, quite different from how the results are stored. The results are the finished work, in a fairly rigid and static form. Lawyers’ templates come from an age of paper, and paper isn’t an interactive medium.

Templates are not just single examples of work that connect to nothing else. Templates, particularly conventional electronic word processor files, are:

  • high-frequency data containers that form the mainstay of both legal business and the contract management function in large organisations.
  • tools for answering questions about legal topics. They need to be read and used in different ways.
  • data containers that might need to add or subtract content. They may need to be customised depending on the purpose, and who is using them.
  • data containers that need to be updated when the law changes.
  • created using a recipe for the content, and using expert domain knowledge.
  • data containers that have some content in common with other templates.
  • intended to be read interactively and in a non-linear way by people.
  • part of the knowledge base for firms that work with legal agreements.
  • based on knowledge of the law, and decisions about drafting.
  • captured knowledge which may itself reflect implicit knowledge that needs to be passed on to other drafters or users.

If these are electronic documents, there is no reason why they shouldn’t be able to store data in intelligent structures, and permit computation that assists with how they are to be used.
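One way to read that list is as a set of data-structure requirements. Here is a minimal sketch (the types and fields are my own illustration, not any existing product’s schema) of templates as containers that reference a shared clause library, rather than copying text between files:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Clause:
    id: str
    topic: str
    text: str


@dataclass
class Template:
    name: str
    clause_ids: list  # ordered references into the shared clause library


# A shared library: updating a clause here updates every template using it.
CLAUSE_LIBRARY = {
    "gst": Clause("gst", "Tax", "All amounts are exclusive of GST."),
    "notices": Clause("notices", "Notices", "Notices must be in writing."),
}

lease = Template("Lease", ["gst", "notices"])
sale = Template("Contract of Sale", ["gst"])


def shared_clauses(a, b):
    """Clauses two templates have in common - by reference, not by copy."""
    return sorted(set(a.clause_ids) & set(b.clause_ids))
```

Because templates share clauses by reference, ‘content in common with other templates’ becomes a set intersection rather than a manual comparison of two documents.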

Who cares about these templating concerns?

I believe customers and lawyers will care a lot when they see the advantages of moving on from Word Processors to store legal information, and start using computers intelligently to store, analyse and modify document content, including meta-data. For the first time, they will be working with data that is stored in a way designed for computation and manipulation, and not in passive templates.

The paradigm shift is to attempt to work with document contents as the primary data, in a way that separates data from code or markup(styling), and allows intelligent programs that can manipulate that data directly.

Styles can be applied to data for output purposes, but they should not define the data. The data should not be referenced indirectly, for example by where or in what format, heading level or style it appears in a Word Processing document file format. The mapping of styles to ‘heading levels’ still does not define individual units of data in an unambiguous way, where they can be referenced and manipulated independently.
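That ambiguity is easy to demonstrate: two units can share a heading style, so only an explicit identifier distinguishes them (a minimal sketch; the ids and style names are my own illustration):

```python
units = [
    {"id": "definitions", "style": "Heading 1", "text": "In this agreement:"},
    {"id": "term", "style": "Heading 1", "text": "The term is 2 years."},
]


def find_by_style(units, style):
    # Indirect reference: returns every unit that happens to share the style.
    return [u for u in units if u["style"] == style]


def find_by_id(units, unit_id):
    # Direct reference: unambiguous, and independent of presentation.
    return next(u for u in units if u["id"] == unit_id)
```

Styling can still be applied at output time, but the unit is addressed by its id, not by where or how it happens to be formatted.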

General disadvantages of Word processing software

Word Processing software is a legacy item, but it is still perceived as the foundation of legal software tools. This view will need to change.

  • Surprisingly, word processors aren’t much different from paper, if you think about how non-interactive they are. It only seems like there is a lot of interaction, but this mostly consists of the key-presses needed to get input and output done.
  • The amount of typing (individual alpha-numeric key-strokes) that is needed for input/interaction when using a word processor is huge compared to the benefits of some other interactive computing function that acts as a time-saver, or manipulates data in some way. {To do: quantify this. Most likely there will be thousands of words, tens of thousands of keystrokes that surround a few menu selections} Even the ubiquitous search function is often used only as a means to facilitate further key-strokes or changes to individual parts of text.
  • The Word Processors that are used to store templates in specific document files were never designed as diverse knowledge repositories. They were not designed in a way that different types of content could be classified and hidden easily (e.g. notes, comments), or extracted as individual blocks, or used across different documents.
  • It’s not the best approach to assume that external document management systems, which store individual documents in word-processor formats with meta-data, can overcome the lack of attention to document contents. This approach fails to provide any real ability to work with document contents as the primary data.
  • There is still a close connection between the physical disk file that stores a document and the document as a ‘record’.
  • It’s not in the design assumptions for individual documents to all exist in some kind of in-memory database.
  • It’s not part of the design assumptions to link information for use cases of one or more documents (like letters, and legal documents), that can be saved and retrieved.
  • Even if the software allowed in-text notes, or notes that were themselves capable of data analysis, they’d still be stuck inside a single document, loaded up on a case by case basis.
  • Notes and comment ‘bubbles’ are still very much inserted into a document on the assumption that the document is a stand-alone creation.

We are going to need to understand how documents are put together in terms of content, not just in terms of some simple styles or templates. We are going to need to think about how the content of multiple documents can co-exist in an in-memory database, not just in separate files on a disk. We are going to need to know how to extract data from legacy formats (the current Word Processors), in order to efficiently work it into new databases and data file formats.
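Extraction from the current legacy format is at least mechanically feasible, because a .docx file is a zip archive whose word/document.xml part holds the text in OOXML markup. The sketch below builds a deliberately minimal in-memory ‘docx’ (a real file has more parts, such as content types and relationships) and pulls the paragraphs back out using only the Python standard library:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# The WordprocessingML namespace used by OOXML for w:p (paragraph),
# w:r (run) and w:t (text) elements.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

DOCUMENT_XML = (
    '<?xml version="1.0"?>'
    '<w:document xmlns:w='
    '"http://schemas.openxmlformats.org/wordprocessingml/2006/main">'
    "<w:body>"
    "<w:p><w:r><w:t>1. Definitions</w:t></w:r></w:p>"
    "<w:p><w:r><w:t>2. Term</w:t></w:r></w:p>"
    "</w:body></w:document>"
)


def make_minimal_docx():
    """A .docx is just a zip archive; word/document.xml holds the text."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        zf.writestr("word/document.xml", DOCUMENT_XML)
    buf.seek(0)
    return buf


def extract_paragraphs(docx_file):
    """Pull each paragraph's text out of the OOXML markup."""
    with zipfile.ZipFile(docx_file) as zf:
        root = ET.fromstring(zf.read("word/document.xml"))
    return [
        "".join(t.text or "" for t in p.iter(f"{W}t"))
        for p in root.iter(f"{W}p")
    ]
```

Getting the text out is the easy part; turning flat paragraphs back into addressable units (clauses, definitions, topics) is the real extraction problem.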

Algorithms in legal drafting

The algorithms for drafting, if we are to speak about them intelligently, will also need words for the units of data or information that we are manipulating, whether it be contract clauses, or definitions, or quantitative information. We shouldn’t be writing custom code for a single document and its unique contents – we should be acquiring conceptual tools to help us write programs that separate data and code, and allow us to work across many similar documents, or identify why they are unique.

Legal drafting is easily amenable to a more theoretical, abstract model, if you start with a different set of questions, like: how do I go from inputs to outputs when I do the drafting? What instructions are needed? How is information transformed? Which information is capable of being treated as a ‘unit’ of information?

Those who are more interested in abstract models of the content of documents (software developers, especially for document creation), haven’t had the benefit of thinking about it in terms of the subject matter (law) and subject matter expertise. As a result, contract drafting and legal automation tools tend to focus on stylistic elements, rather than content.

Workflow disadvantages when using Word processors

Word processors were always about single documents, not bulk data management and analysis. Documents are labelled and stored in document libraries, as if each were a unique book. There are databases that handle document management by providing ‘meta-data’, but because the data is held in Word Processor formats, it isn’t easily accessible for specific content data analysis. The meta-data might have to be inserted manually too.

Even in this ‘desktop publishing’ environment, the labour-saving tools in Word Processors (e.g. changing font or style) are themselves manual tools. The tools do not provide a user with much ability to apply sophisticated data analysis to the content of the document, or its structure. Mental effort is always needed (and subject matter expertise is non-existent). The result is also something that is held in the same form for both input and output.

What if you want to work smarter with all your documents, and not craft each one as if it existed in its own box? It’s very easy for people to accept the only option (cut and paste) to move things around. But when you do that, you’ve given up on doing it a better way.

There is a gap here (probably one that the market hasn’t worried about too much). The topic of ‘data science’ as applied to the content of legal documents has never really existed. Word processors disguise this big ‘gap’ in the flow of information. They always rely on the human authors configuring the information content for each document as it should appear in the final output. We cannot achieve any kind of interaction with our text data, or space for computation or data analysis, without creating a gap between input and output. It’s the basic requirement for computation. In Word Processors, that gap does not exist.

If the Word Processor encourages your input to ‘look’ like your output, then you avoid computation because the input is the output, and you have to do all the work yourself. This is not really that much different when ‘templates’ that allow data inserts are used, because 99% of the content in the template is not involved in any computational process either. Templates are a way of avoiding the need to think about information, but often you have to, because of the nature of the work. So they appear helpful, but you soon run into new problems.

A very small gap between input and output exists for mail merge programs. Mail merge still uses a simple ‘merge’ in place of intelligent data storage and functions. It splits data into (a) the small pieces that may change and (b) everything else. It is not a particularly intelligent approach, and though it may improve the speed of generating relatively static documents for customers, it doesn’t improve the tools that might be needed by the professionals who craft documents every day. It doesn’t help manage the content data in the template.
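That small gap can be shown in a few lines, using Python’s standard-library string.Template as a stand-in for a mail-merge engine (an illustrative sketch, not any vendor’s implementation). Everything outside the placeholders passes through untouched by computation:

```python
from string import Template

# Mail merge splits the content into (a) a handful of changeable fields and
# (b) everything else, which remains an opaque, uncomputed block of text.
letter = Template("Dear $name,\nYour lease of $premises expires on $expiry.")

merged = letter.substitute(
    name="A. Client", premises="1 Example St", expiry="30 June 2026"
)
```

The template text itself is never data to the program; only the three fields are. That is the entire extent of the ‘computation’.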

Data management improves customer service and value

Data management is linked to contract management, to document design, and customer needs

If we wanted to think about data analysis tools that would be in demand, what questions might lawyers or customers want to answer? e.g.

  • Is this document a lease or a contract of sale? What topics does it cover? Can the customer load the document in, and have some basic ‘dashboard’ information presented (whether it is in a legacy format or not)? Can this be stored as new metadata?
  • Can you show me the topics in the order that I need to read them?
  • Which of the definitions is a concept I need to read first?
  • If I want to insert some extra information about topic X, can I easily do so?
  • Can I update this clause in all my documents?
  • Which are the most variable topics when I deal with my clients?
  • How can I get some statistics about the use of a particular clause in my business, or the use of very similar clauses that are on the same topic?
  • Can I attach the same cover letter to all the documents of a particular type, and insert the client information as needed?
  • Can I pull up the main data for all my clients’ contracts, see what the specifications are, and also what the current status should be re financial payments, reviews, unexpired options etc?
  • How will I be able to check if I refer to a particular piece of legislation in my agreements?
  • In my suite of client documents, what are the similarities and differences?
  • Can I flick between a graphical representation of the document structure and the text?
  • How does my linear set-out of the document compare to the more tree-like structures that reflect the real relationships between the topics?
  • Can I present to the client the document in a summary form for presentation, then drill down and edit specific topics as they make suggestions?
  • Can the system make useful suggestions at this point, so that removing or adding components prompts for additional information (at a high level, content focussed), that need to be included?

Some of the answers to these will promote ideas about how to get better information out of current documents, which clients or lawyers can use, and also inspire ideas for how to store document data and meta-data in new formats.

Usability/design considerations

Or what about these questions, where we might be able to gather hard data if the data was stored in a way we could attempt to analyse its use, instead of just looking at it:

  • How often is this particular clause used in documents in different subject areas?
  • How often do I ask whether my documents are using a particular clause, or trying to compare two clauses?
  • How often do I want to see the relationship between my clause/topic and its definition?
  • How often do I find myself reading back and forth between concepts because the way I have written my document requires me to revisit information in a very unhelpful way?
  • What is the most logical structure for setting out the concepts in my document?

Machine learning pitfalls

‘What about machine learning?’, you might be asking, since everyone else is these days. The difficulty with assuming that some kind of ‘machine learning’ is useful, just because there is ‘learning’, is that it isn’t looking at the actual work practices of lawyers, or even at what information is used to transform inputs to outputs, and what kind of instructions are needed. The algorithms of machine learning are more heavily geared toward statistical analysis of information, and toward the software using correlations between different topics.

Machine learning often requires a very specific problem, and it produces a very specific solution. It can mean that your system is trained for a very specific purpose. If you have to train a machine to recognise correlations, it usually means you don’t have a lot of flexibility in that system. It favours ‘yes’/‘no’ type results – is this a picture of a cat or not? That’s essentially the most basic model of computation – a boolean truth statement about a single question.

Many people also get the idea that if we use ‘computers’, we have to try and shift the entire human role off to a computer (i.e. ‘AI will make lawyers redundant’). This is a fairly naive view of how algorithms can or should be used to help with existing workflows and behaviour. It’s a naive view of how we go about developing tools using computers at all. AI is a broad term, but it can encompass both expert-reasoning systems that process information automatically, and reasoning systems that achieve greater problem-solving by absorbing data and applying statistical analysis to it (machine learning).

Machine learning uses algorithms, but it doesn’t necessarily help us understand what algorithms humans are using.

Why not store documents as code?

Storing documents with the addition of code for ‘automation’ does not really achieve the gains that we should expect from treating the contents as data directly. Algorithms should apply to data that can be loaded up as data sets. However, some trends in document automation (particularly of contracts) make it harder, not easier, to apply an algorithmic, data-orientated approach.

The current approaches to using logic, or code, to influence the content of legal documents inside word processors have included:

  • storing document templates, and then using code to influence a mail-merge operation.
  • inserting code inside the word processing content itself, so that it is processed ‘in-situ’, to influence what content is ultimately displayed in a word-processing file or not.

The code wrapping solution generally seems to work for individual documents (after a lot of setup), but it has limitations and is hard to scale. It doesn’t encourage a data management environment that is different to single-use word processing. In fact, it might encourage trying to insert more conditional data inside a single container (WP document) because there’s no other way to hold the data for similar, but not identical documents.

Code is not a particularly good format for storing data, if you want the ability to apply data analysis to its contents. If documents share similar data, then having each separately coded makes it even more difficult to deal with that data (and encourages people to shove more code around the data, in a single code file).

Having that code inside a word processing application further increases the barriers to understanding what it does. It is a layer built on top of a raw format (like OOXML) that has nothing to do with the types of data in the document content.

Some of the problems with using the ‘computer programming’ metaphor by wrapping code around word-processing data are the ultimate cost, inefficiency and limited data exploration options, i.e.:

  • It is like having an object-orientated approach where you assume you will only have one set of data for your code.
  • There is no clear separation of code and data, even for a single document.
  • You have word-processing formats used as both intermediate and final formats (this is not space efficient and it doesn’t really improve the ability to analyse the contents as data). This can easily confuse the users who want to describe their documents as ‘templates’. Which one is it?
  • The content data you have is still inside a single bloated container that assumes it is unique, and stored as a disk file.
  • There’s still no opportunity to hold lots of documents in memory at the same time, or easily link them to each other.
  • It forces code into the content environment, which has negative consequences for anyone who later updates those templates. It creates a higher overhead in terms of needing an in-house coder or consultant to make changes. It tends to further separate the data from the professional users, without providing any direct advantage to them for single-use situations.
  • It is ultimately still a lot like a mail-merge algorithm, except that some of the content that appears in the final rendered output is conditional on what kind of data is coming in.
  • The design of the software may also be based on assumptions about the availability of staff in certain roles, like data input, content updating, or coding.
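The data-first alternative argued for in this section can be sketched briefly. The field names (‘id’, ‘topic’, ‘text’, ‘applies_if’) and the selection rule are hypothetical, chosen only to contrast plain data records with code embedded in a word-processing container:

```python
# A sketch of clause content held as plain data, with the selection
# logic kept outside the content itself. Because the clauses are data
# records rather than code wrapped around text, many documents can be
# held in memory, queried and compared at once.

clauses = [
    {"id": "C1", "topic": "payment",
     "text": "Fees are payable within 30 days.",
     "applies_if": lambda matter: True},
    {"id": "C2", "topic": "confidentiality",
     "text": "Each party must keep the other's information confidential.",
     "applies_if": lambda matter: matter.get("has_nda", False)},
]

def assemble(matter):
    """Select the clause texts applicable to one matter."""
    return [c["text"] for c in clauses if c["applies_if"](matter)]

print(assemble({"has_nda": False}))   # only the payment clause
# The same records support analysis across documents, e.g.:
print(sum(1 for c in clauses if c["topic"] == "confidentiality"))  # 1
```

Nothing here needs a word-processing file at all until the final rendering step, which is precisely the separation of code and data that in-situ document coding makes difficult.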

Conclusion – data management of content is the way forward

The way forward? Lawyers need to be able to explain what they do (their data algorithms for content), in terms sufficiently nuanced that they lend themselves to computational analysis.

We are going to need people trained and experienced in both fields to lead the way. Once we have that, then we’ll be able to introduce a new paradigm, in which legal content itself consists of units of data, manipulated in software applications that treat it as data, and not merely as text laid out on a page.

National Environmental Standards – Part 1 – Models and data

The ‘centrepiece’ of new Environmental Law recommendations is a set of National Environmental Standards. See the interim report.

Interim Report Highlights

This wide-ranging report has a whole lot of recommendations that seem to be reflective of how we transmit information between ourselves now, in the twenty-first century, and so have a kind of odd structure. I’d like to think that the first question we ask, when considering the care of the environment, is: what is life, and how do we preserve it?

It strikes me as strange that the EPBC Act does not talk in terms of life and death, but that’s really what it should be based on. What is an ecosystem, if not a living system that we must talk about in terms of life and death? How does it survive and thrive, or die and disappear? These are ultimately the questions for us. The questions that environmental protection might ask, like how we might reduce the damage, should be answered by some everyday, pragmatic wisdom: hack living things and soil to pieces with modern machinery, and they will often die. This realisation is shrouded in a lot of talk about data, models and markets, and a lot of politically satisfying language. If legislation is written mainly because it sounds nice to the ears of those that write it, then it may still not achieve its purpose.

This set of recommendations places its faith in science, and in the magic of information to achieve miraculous changes in behaviour. In doing so, it fails to come to terms with its disparate objectives – to bring communities together in terms of understanding the need for environmental protection, but also to achieve a kind of technocratic oversight in the collection of information and in predictive scientific models (see Part 3). Has no one explained how many choices and assumptions are needed for these models? The report foreshadows that the saviour of the environment is a data economy and bureaucracy. Time will tell whether the detailed review of that new vision will lead to operational, on-the-ground changes. As I will illustrate in Part 3, at the moment there are some classes of people who just don’t know where to find and understand the EPBC Act anyway. Perhaps that’s why people think we need a centralised environmental police force (in the form of a single environmental regulator).

Some of the recommendations of Professor Samuel, in this Interim Report into the workings of the EPBC Act, imagine a world in which nerdish bureaucrats hunched over computers will come up with scientific models that illustrate the calamities that might ensue from another prognosticating approval decision. In ‘review-speak’ this kind of recommendation is written like this:

To apply granular standards to decision-making, Government needs the capability to model the environment, including the probability of outcomes from proposals. To do this well, investment is required to improve knowledge of how ecosystems operate and develop the capability to model them. This requires a complete overhaul of existing systems to enable improved information to be captured and incorporated into decision-making.


We should be very skeptical of calls to carry out more modelling, and to scrap everything and start again, in the vain hope we will do it better next time. Modelling, in the right circumstances, is a classic example of garbage in, garbage out. Worse, it oversimplifies complex systems using too little data.

It’s not new to say that we have scientific models to help model the environment, and it’s not even new to say that the models we already have might not actually work. However, recent evidence to Courts and enquiries in relation to habitat modelling has exposed the over-reliance on models in operational contexts. That evidence illustrates the tokenistic use of models in substitution for genuine close observations or intelligent thought. Based on this evidence, we must allow for the fact that some modelling is not only failing to be accurate, but it is contributing to a cultural laziness that results in breaches of environmental protection requirements.

The plight of the Leadbeater’s Possum in Victoria is a specific example of this. Justice Mortimer in the Federal Court (at paragraph 233) was able to say, in relation to the brow-beaten and chastised Professor:

Ultimately, Professor Baker appeared to admit that his modelling could not be used to predict if Leadbeater’s Possums were actually present in certain habitat. That seems to me to be a fatal flaw, and to expose the limits of modelling as a method of protecting and conserving important habitat, as opposed to surveys which are likely to detect where the habitat is which is in fact being used by the species.

Friends of Leadbeater’s Possum Inc v VicForests (No 4) [2020] FCA 704 at [233]

There are other examples, from other States, where modelling has been used in similar fashion by logging companies to model habitats for tokenistic retention of trees in forestry operations. The NSW Legislative Council’s June 2020 report (Koala populations and habitats) referred to a similar reliance on technicality and token retention of trees (para 2.73):

That similar behaviour has been seen in two different States illustrates the extent to which these companies model their ‘best practice’ after each other, or at least resort to the same tools. It would be interesting to know exactly how each internal group justified its decisions, and whether it was arrived at independently or not.

These problems are deep, in that they show a culture that does not even value direct observation and data collection, but is also prepared to be lazy and wilfully blind as to whether what they are doing is working. They simply do not care, and no amount of legislative reference to better data or models will overcome that problem. It merely increases the burden on auditing and enforcement.

One of the dangers of collecting information is that the kind of information you collect depends upon how you frame the problem. For example, if you only collect data about where you actually see species, and use the absence of species to justify taking out (removing/cutting down) habitat, then you end up with a feedback loop. The increasing degradation of environments and encroachment of human activity pushes the species into other places. It’s the classic ‘Greedy Clearing Algorithm’.
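The feedback loop can be sketched as a toy simulation. The patch counts and survey results below are entirely invented; the point is only to show how a presence-only rule converts each survey into a justification for the next round of clearing:

```python
# Toy model of the 'Greedy Clearing Algorithm' feedback loop:
# every patch without a sighting in the latest survey is cleared,
# even though the species uses all patches over time.

def greedy_clearing(patches, observed_this_survey):
    """Keep only patches with a sighting; treat absence of evidence
    as evidence of absence."""
    return patches & observed_this_survey

habitat = set(range(10))          # ten patches actually used over a year
survey1 = {1, 4, 7}               # where animals happened to be seen once
habitat = greedy_clearing(habitat, survey1)
print(len(habitat))               # 3 patches remain

# The displaced population is now squeezed into fewer patches, the next
# survey sees even less, and the same rule clears again.
survey2 = {4}
habitat = greedy_clearing(habitat, survey2)
print(len(habitat))               # 1 patch remains
```

Each iteration looks locally defensible (‘no sightings, so nothing to protect’), yet the composition of the two steps is a ratchet that only ever shrinks habitat.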

The national focus of the legislation and review does, of course, influence the kind of recommendations for reform. There will obviously be some attraction in permitting information gathering to allow some people to pat themselves on the back for doing a good job, according to whatever formula for ‘ecologically sustainable development’ we come up with. We’ll be tracking and counting with some new-fangled environmental-development ‘ticker’, which monitors all our key environmental performance indicators. For example, there is this recommendation:

National environmental economic accounts will be a useful tool for tracking Australia’s progress to achieve ecologically sustainable development (ESD). Efforts to finalise the development of these accounts should be accelerated, so they can be a core input to SoE reporting.


You can see how these reports are going to wind their way into government reporting and electioneering – already we have acronyms like ‘SoE’ for ‘State of the Environment’ because it’s been more important to have a catchy phrase in a report than to establish persistent community values that will safeguard the environment.

The regulators are also going to take whatever opportunity they can to improve their own resources for this modern, technology-driven bureaucratic approach:

with a full suite of modern regulatory monitoring, compliance, enforcement and assurance tools and adequate funding.


Postscript (24 November 2020): Information Gathering and The Koala Census.

The ‘Koala Census’ reflects the pursuit of an information-driven approach that was foreshadowed in the government’s National Environmental Standards.

Information-led responses to ongoing environmental problems may fall into three different kinds.

  • The first involves a credible and open attempt to supplement existing knowledge, and inclusively work with the people who have previously put time and effort into solving the problem, but have lacked resources. This is the best approach, and works better when policy and information-gathering are being developed in parallel.
  • The second is one that reflects an overly-enthusiastic but naive proponent leading the new approach, who insists on gathering new information because it gives them ownership of the ‘newness’ of the solution. There is often a policy vacuum, at least for those leading the initiative. This may seem benign, but it consigns a lot of wisdom and personal involvement to the sidelines. It wastes time and resources and sends the wrong message. It risks the decision-maker being someone who, in ignorance, thinks the last 6 months of narrow quantitative information is more important than decades of qualitative understanding and knowledge of interactive systems.
  • The last is one where there is an authoritarian mindset, in which the information-gathering reflects a pre-meditated attempt to ultimately control and exert power for non-environmental interests. The focus on information is part of a strategy that will use information to blindside the public as to the lack of real intent, and relegate people who have genuine concerns to the sidelines.

The last of these approaches is the most cynical, and also the one least likely to deal with the systemic and cultural changes that need to be made to solve the problem. Information gathering per se does not solve problems with dynamic interaction and conflicts. The mere ‘counting’ of anything does not provide that wisdom.

The information-driven approach involved in the Koala Count should be evaluated to see which of these different situations it represents. Gathering new information when the problem’s true causes are already qualitatively understood (habitat loss, encroachment and land-use conflicts, revegetation, predators, fire) might be seen as dismissive, reflecting the cynical approach. Whether it is or isn’t depends on how well policy and the new information are integrated.

The set of factors that affect political solutions to environmental problems includes who is using the information, what insight into the problem they possess, and what power they are willing to exercise to achieve the intended benefits, even if unpopular.

Yes, ‘Koala counts’ have been around in more than one State –


National Environmental Standards – Part 2 – Offsets

Let’s talk about offsets.

On the bright side, the interim report does contain some wisdom, albeit in the form of some belated observations about the bleeding obvious. The Reviewers’ interim conclusion confirms what we already know, namely:

Offsets do not currently offset the impact of development. Proponents are allowed to clear or otherwise impact habitat by purchasing and improving other land with the same habitat and protecting it from future development


This is, unfortunately, stated in such general terms that the true subject matter is difficult to discern. It appears to be saying, for example, that hacking down a forest and putting some of the animals in another forest is, in fact, a net reduction in the extent of natural ecosystems; a reduction in our national heritage. Had this been written out as a formula for the scientists to perceive more clearly, and so be in a position to advise the politicians and the public more clearly, then we would have been denied this opportunity for self-serving revelation.

So what should we do about these crazy offset schemes now? Pack them up? No, there is definitely some reluctance to do so. It seems we’re being encouraged to accept that finding somewhere else to put animals, or trying to regrow a forest in a week, should be a last resort:

requiring offsets to be considered only when options to avoid and then mitigate impacts have been actively considered, and demonstrably exhausted


The trouble is, that’s pretty much already the case. As soon as a project is going to decimate a forest and wipe out half the remaining population of endangered animals, the project proponents say to their consultants – ‘well, we’ve run out of options that we like, so please consider offsets’.

So far, these recommendations do not reveal much of a change. The Reviewers add another idea that steps the reality checking up a notch by saying that we have to stop using pretend offsets and start using real ones:

requiring offsets, where they are applied, to deliver protection and restoration that genuinely offsets the impacts of the development, avoiding a net loss of habitat


Should I be impressed? No. This is not really a legislative reform. This is just an admission of a failure, dressed up as a reform proposal. The real sentiment is hidden at the bottom, where the Reviewers reveal their hand and the old ‘let the market fix it’ trick is rolled out yet again:

incentivising investment in restoration, by requiring decision-makers to accept robust restoration offsets, and create the market mechanisms to underpin the supply of restoration offsets.


The first part of this recommendation is a direction to decision-makers to ‘accept robust restoration offsets’. Were they not robust before? Were they only partially accepted? And is the idea of a ‘restoration offset’ an offset of the proposal to restore something, so that it gets watered down? Such vague language for such an important subject.

Moving on to the market ‘solution’: now we’re talking about the supply of these ‘restoration offsets’. Where do they come from? Why is it so hard to find them? Are these offsets new and improved forests that we can deliver to a site, and thereby enhance the environment? And how will the miraculous mechanism of a market supply chain suddenly deliver them?

Will a sea-container full of fully grown eucalypts suddenly appear, ready to be planted in neat lines using a technologically-advanced fence-pole driving mechanism? And if tree hollows are needed, then you better get started a 100 years or more in advance. A whole new set of technologies may need to emerge that can grow mature trees in just a matter of weeks, for just-in-time delivery.

National Environmental Standards – Part 3 – Farming

Any reform that focusses on the possibility of ecosystem and biodiversity improvement in regional Australia has to accept that the primary agents of environmental degradation, in rural areas, may be the farmers themselves. And complicating that matter is that the discussion is already concerned with a heavily degraded environment in which many species are already threatened or in low numbers. For example, one of the Review’s experts helped write a report in 2018 that said, in part:

More specifically, farmers prefer to deal with locally based (State/industry) advisers but they also are generally not aware of farmer’s obligations under the EPBC Act (and their interactions with State based obligations). The information available on obligations for agricultural development under the EPBC Act is difficult to find and follow. Environmental impact assessment processes and listing processes for MNES are widely viewed by farmers as unpredictable, unclear, complicated, costly, time consuming and impractical for example, determining if an activity is a continuing use or may have a significant impact, determining if a threatened species or community is present, implementing a detailed survey process for relatively obscure species, implementing a spot spraying program for invasive weeds over large areas or dealing with a planted non native crop invaded by a mobile species which is a MNES.


Translated, it means that farmers are largely not interested or just don’t have the time or inclination to worry about all this ‘environmental stuff’. Not only that, it’s also critical of the fact that farmers can’t get the idea from reading the EPBC Act. Apparently, it is difficult to ‘find and follow’. For most people, the idea is drummed into them that ignorance of the law is not really something you can use as an excuse.

It’s in relation to native species and the land itself that the indigenous people have always had a head start on the Europeans. They avoided the massive energy and often irreversible change associated with modern destructive agriculture. To achieve this requires working with the environment, and utilising natural energy, and natural regrowth. It means respecting and understanding life.

Indigenous knowledge and western science should be considered on an equal footing in the provision of formal advice to the Environment Minister. The proposed Science and Information Committee should be responsible for ensuring advice incorporates the culturally appropriate use of Indigenous knowledge.

None of these statements really highlight the difficulty of inclusivity until you place them in context. In what way does indigenous knowledge interact with farming properties and communities? Is it compatible and if so how? Why don’t we already know this? If we still claim, in any case, to be prioritising the activity of farming, then the power dynamic is already tilted in favour of history.

Consultation is the cure-all ‘solution’, but it often illustrates how most ideas are only at the starting point of being discussed person to person. The starting point seems to be to answer the question of how to motivate people to do anything except maximise economic outcomes at the expense of the environment. From the outset, the ‘power’ in the discussion is the landholding owners, and environmental protection is, historically, a belated concern. The notion that altering the topsoil, the natural restoration of carbon-based life, and the passage of water will have long-term and not merely short-term consequences (or none at all) seems to have taken a long time to catch on.

We can predict where this call for consultation will occur by knowing that the Review’s Panel Members have already led other reviews on similar issues. Wendy Craik, for example, has dealt with the particular problems associated with farming and agriculture. The 2018 report concluded, in general, that farmers don’t really understand the EPBC Act, find its processes complicated, and need the Department of Environment to act more quickly. The clear impression gained from the Recommendations is that farmers really aren’t terribly self-motivated to do this on their own and need help. However, it’s also important to note the purpose of the Review actually pre-empted some of these concerns:

The Review focused on options for reducing the burden of the regulatory obligations created by the EPBC Act on farmers without reducing environmental standards.

It was also clearly contemplated that this review would flow on into the subsequent 2019 EPBC Act review:

While the Terms of Reference for the Review restrict its scope to consideration of the interaction between the EPBC Act and the agriculture sector, a number of the Review’s recommendations will, once implemented, deliver benefits for other sectors. Some recommendations tailored to the agriculture sector might be adapted to apply to other sectors. These could be considered in the statutory review of the EPBC Act to be conducted in 2019.

We shouldn’t forget that the current paradigm for ‘ecologically sustainable development’ is perceived and framed as trying to enhance environmental status whilst also pursuing, as the main goal, ‘sustainable [industrial or agricultural] development’. This is not the only interpretation of ESD, but it does seem to accord with the specific action points raised and implemented. Whilst some of the environmental principles that are espoused as goals seem broadly sound, it is always possible to reverse the priority of environmental concerns in the details – by moving from compulsion and the ‘stick’, to ultimately, the ‘carrot’. But there seems to be, even in the twenty-first century, a complete lack of confidence that this is even possible (let alone probable). As Craik’s 2018 letter to the Environment Minister at the start of the report said:

To provide a “carrot” to balance the “stick” approach, there appear to be no strategic approaches with appropriate incentives to enable the agriculture sector to grow and develop (as often encouraged by government policy) while maintaining national environmental standards.

Significantly, and a feature of many ‘review’ processes, there is a call for ‘innovation’, as if an appeal to some kind of magical creativity is needed to actually achieve the goals. When the underlying goal is to increase ‘economic growth’, then it seems like a real struggle to find compatible behaviour modification that might produce environmental benefits. Although many try and find psychological motivation (often through simple financial or market-based behavioural factors), cultural change seems slow, and uninspired. Solutions and calls for consultation are often set in such a way that the solutions are in the hands of those who currently have a dominant position over the state of the environment:

Recommendation 2

It is recommended that collaboration between agriculture sector experts and environment and biodiversity experts be encouraged, to identify innovative practices and activities and areas of prospective agricultural growth over the next ten years.

And so the open-ended failure to design a better solution continues.

These reviews recommend that people try and figure it out together. Is this always an inevitable consequence of this type of activity? Will solutions have to be developed on the ground, by self-motivated people? And if they are, will legislators ever take notice, or have the wisdom to know how to encode those as templates for desirable behaviours?

Background to EPBC Act Review

The Discussion Paper

The direct link to the Review’s Discussion Paper is here.

The review of the Environment Protection and Biodiversity Conservation Act 1999 (Cth) is supposed to occur at least once every ten years. This is taken literally, since the last review was done in 2009.

A slight shift is the move for more universal standards, that cross State boundaries. This restructuring is not unique to environmental law: it has already occurred in relation to the Corporations Law.

So why bother with a review? The aim is to improve the outcomes from the operation of the Act. If we are serious about measuring this effectiveness, then the legislative reform needs to be tested to see if it:

  • promotes genuine attention to caring about the state of the environment before, during and after projects;
  • provides simpler mechanisms for obtaining information on the state of attention to these things, preferably without the need for litigation; and
  • imposes consequences (immediate and ongoing) on projects that fail to show genuine commitment to the conditions that led to the project being approved in the first place.

The Discussion Paper seeks to invite comment through a series of questions, which are really leading questions that guide readers into thinking that some of the issues are new, when they really are not. By repeating the same questions at each review, asking if markets and self-regulation might work, there is an implicit message that we will not learn from the past, at least not quickly.

Take Question 21, for example:

The limited capacity of government resources to directly manage Australia’s environment may constrain the achievement of environmental outcomes. Greater use of ecosystem services markets could make it easier for business to meet their obligations by investing in environmental outcomes. There is also an opportunity to take advantage of the greater focus on corporate social responsibility to increase private sector interest in improving the environment.

As noted earlier, where the interests of the regulated community aligns with the regulatory outcome, there may also be advantage in leveraging mature industries’ ability to self-regulate, with the Commonwealth retaining oversight. These arrangements can be more adaptive in a rapidly changing world and have greater support than traditional regulation, especially if there is connected and coordinated investment in what matters most with transparency of obligations supported by quality assurance arrangements.

Finally, the provision of greater information and education can change the behaviour of consumers and business, such as through labelling and other information products.

Is this so new or insightful? Ideas like self-regulation have, in the past, not achieved significant success where there is a conflict between the priorities or values of those self-regulating, and the necessary prioritisation of environmental protections. Is there anything to suggest we now know and acknowledge these? It is left to those providing submissions to answer these questions.

Similarly, consider Question 23, which deals with the expansion of offset markets. Is this a prejudged conclusion? The explanatory text (in the December 2019 Discussion Paper) seems to illustrate the assumption that offsets are working:

A greater focus on developing efficient ecosystem services markets may lower costs and support greater investment in the environment. There may also be opportunity to improve the environmental outcomes that the current biodiversity offsets system delivers and the systems that maintain the integrity of offsets.

Here again are the old solutions, particularly market solutions, which have tended to encourage the idea that the environment is a tradable commodity, or one that is capable of being ‘produced’ if there is enough incentive to do so.

A failure to implement offsets in a timely way (in many cases, they are needed before project commencement) merely encourages token applications to be made for offsets as conditions of an environmental approval. Offsets themselves tend to reframe natural resources as economic consumables. The EDO noted, in its April 2020 submission for the review of the EPBC Act:

The success or otherwise of environmental markets is highly dependent on whether the market settings adequately reflect the limited nature of natural resources and properly price the costs of environmental harm, including those costs that traditional economic models consider to be ‘externalities’.

URL: (at page 100)

Part of the problem with the Act, which is an issue for any legislative reform, is whether it can take bold steps to try and increase the priority of environmental actions in the day to day lives of the community, and particularly those in the agricultural or resource areas, like farmers, loggers, coal-miners, truck drivers or fruit growers. As the EDO said in its April 2020 submission:

“To restore trust and achieve improved outcomes, community engagement must be at the centre of the Act.”

URL: (at page 94)

Software developers – is going it alone a good idea?

So you’ve decided to go out on your own and start a business as a software developer. Is that a good thing? Do you have sufficient experience to know if this is a good idea, or are you going to see how it goes for a while?

There’s a lot to learn, and this note might help you to think about some of the issues that are in front of you.

Issues for sole software developers and their code

Is your business model dependent on you retaining any of your own code (whether for tools, or code running on external servers, etc.)? For example, have you developed an ‘engine’ that enables you to efficiently deliver your services, and which is therefore a core asset of your business?

Who owns the code, and is some or any of it going to be available to you for future work?

Is the code owned by you or the client, or is it going to be open-sourced under a licence? These are important business and risk questions. If the client can modify the code, who is responsible for the whole system? Be very specific about what your product is.

Whenever you discuss code, think about your intellectual property, and what you have invested in your ideas. Does your code give you a market advantage and if so, for how long? Do you rely on your ability to keep ahead of the market or on a specific advantage from specific code? Do you just sell code to clients as they require it, and so your business relies mainly on your personal coding skill?

If you are not yet in a business arrangement, make sure you think about whether you need to protect your intellectual property (if it is already valuable), or if you are prepared to discuss your information with someone under the protection of a confidentiality agreement.

Remember it is hard to stop the flow of ideas, so make a decision about what you want to share and what you don’t want to.

Research and innovation – special considerations for sole coders

If you are involved in innovation or start-ups, you may have protected environments or ‘experiments’ where you try something out, knowing that it is not intended to have the same risk or legal liability as the main product.

Have you developed a unique software solution that is essential to your business, and that is capable of registration or needs careful non-disclosure?

How soon do you need to protect your research, if you are offering a specific product to the public or the world at large? Are there registrable designs, or copyright, involved?

If clients are involved in giving you feedback on a new product, make sure you think about how much you need to protect your commercially-sensitive ideas when you obtain their feedback. If this is before your MVP (minimum viable product), make sure you think about protecting the market-valuable information. You may want to think about what you are prepared to disclose in your ‘pitch’ before you enter into a contract.

What is your business and research worth to you? How would you record the value of your R&D time, and your product, before there is a defined product? Are there any financial or tax incentives for recording R&D time as an asset? Seek financial advice on those possibilities.

If you enjoy working with ‘open-source’ projects, then make sure you still have an appropriate open-source licence for your code.

Specific risk assessments for individual contracts

You will need to fine tune your financial model and risk assessments for individual contracts. This will depend on who the client is, and what they want.

In what environment is your code going to be working? Will you promise (i.e. offer a warranty) that it will work on particular hardware, a particular platform etc.? Or maybe you won’t. Maybe you want the client to test it and sign off, after which they carry the full risk?

How significant is the code to the client’s business? It is essential that you understand how critical it is. Paying attention to this will help you weigh up whether the quality and continuity of the code (including maintenance) required is balanced by your remuneration for the work. Is it possible to pass over the risk of the contract working after the end date of the contract, or within a period in which ‘completion’ is assessed?

A related question is what objective standards the code has to meet even before you get paid. Remember that unit tests or performance tests are only one way of getting sign-off. The best way for you and the client to say ‘this is done, and we are happy’ is up to you. It depends on the nature of the job. Above all, get those specs in writing.
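Written acceptance criteria can often be expressed directly as executable checks, which gives both parties an objective test of ‘done’. Here is a minimal sketch in Python; the function, sample data and thresholds are all hypothetical stand-ins for whatever the contract actually specifies.

```python
import time

def process_batch(records):
    # Stand-in for the deliverable being accepted under the contract.
    return [r.upper() for r in records]

def acceptance_checks():
    """Run the agreed, written sign-off criteria and report pass/fail."""
    results = {}
    # Criterion 1 (functional): the agreed sample input produces the agreed output.
    results["correct_output"] = process_batch(["a", "b"]) == ["A", "B"]
    # Criterion 2 (performance): 10,000 records processed within 1 second.
    start = time.perf_counter()
    process_batch(["x"] * 10_000)
    results["fast_enough"] = (time.perf_counter() - start) < 1.0
    return results

if __name__ == "__main__":
    print(acceptance_checks())  # → {'correct_output': True, 'fast_enough': True}
```

If the contract’s sign-off criteria change, the checks change with them; the point is that the written specification and the executable test stay in lock-step.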

Starting a small software business

Starting a small business in the software industry, perhaps as a software developer, involves many considerations like any other business. You will need to think about whether you want to be a sole trader, or become incorporated. You will need to think about what tools and hardware, licences, hosting, cloud services, and working spaces you need, as well as things like branding and marketing. Accountants and business advisers will help you with these decisions.

You can get some general advice from a business advisory service. In Perth this includes the Small Business Development Corporation – check the web; they have good advice on general issues, including the need for legal contracts. Phone them up and see if they can put you in touch with people in software too. I’d also recommend the Australian Computer Society, if you are not already a member. They have good resources.

Business planning

Think about what your business is. Are you a lone coder offering software? Are you part of a group of people who combine to offer a complex business product?

What is your market? How well do you know it? Who will you attempt to win work from, and what will they pay for the work, service or product? Can you win work by putting up examples of your work or your portfolio, or by using reputations and rankings on coders’ sites? Are you trying to win work from the business market, or as a contractor in the more computer-savvy coding or scientific world?

Do you write code and keep it as an asset (e.g. licence it to clients) or sell the code once-off and then have nothing further to do with it? Or do you promise to service code and keep it working?

Some software developers may wish to become affiliated developers, using the resources of companies like Apple and Google to sell stand-alone apps to a national or international market. Is that what you want to do? Or are you working on more complex problems, suited to each individual client?

How much thought have you given to the right service in the right market, as alluded to above?

Business size

There’s no reason to lock yourself into the idea that your only possible business model is as a coder for hire. It may not be the best work or business environment for you. You may actually want to work with other people, and you may find that you need to be in some kind of multi-skilled business that can offer a broad suite of services to clients, with specialists all contributing to the final business product.

If you are a sole trader, some of the social aspects of working in a larger business will have to be met in other ways. How will you maintain industry connections and remain abreast of what’s going on? How will you be able to maintain a competitive advantage? How long might you be working before you need to find people with other skills to help you win or retain work?

Once you have more than just a sole founder in your proposed business, then questions about who the founders are, what other staff are needed and what they will provide to the business will all become relevant questions.

What’s your financial model?

Think about payment terms for individual contracts. Perhaps you have individual software development contracts that become a source of revenue in themselves, contributing to the revenue of the business as a whole.

Let’s assume you are negotiating individual, person-to-person contracts. Do you want to get paid up-front, in instalments, or just at completion? What risks are there to you? And what might you promise as you go? What are you getting paid for, and what is the relationship between the value of your work and when the risk passes to the customer? For example, if you get paid $100, you might not be taking much responsibility, and the client just has to make do with it. If they’re paying you $10,000, they might expect it to work really well.

Risk assessment

As often as you can, carry out risk identification and planning in your business. Risk planning can be win-win, if you discuss the risk of product/services failure and help both the client and yourself prepare for that possibility.

Contract negotiations

Begin to understand your business and risk identification by thinking about entering into a well-written legal contract.

See here:

There are two essential things to think about during contract negotiations:

  1. Why you need a legal contract at all
  2. Obtaining proper legal advice to assist you, when you need it.

The negotiation of the contract is an important stage. If you get that right, it will pay off for you later. In large projects, over a long period of time, with large financial returns and risks, it is essential that you have a good contract before you start. Even if you have lots of small contracts that are essential to your business, it might be a good idea to have a lawyer draw up a set of terms that you will repeatedly use, or vary slightly.

Legal Contracts

It is often better to prepare a full set of contract terms rather than just a few sentences on an invoice. The greater your financial risk, the more important it is for your contract to cover the major risks and, in some areas, to be detailed enough that there won’t be confusion or ambiguity if a problem occurs. Make sure this is all in writing, and signed, so that there is proof of agreement by all the parties.

Understand what a legal contract is and why you might want one. Some of the reasons include:

  • It sets standards for behaviour and communication that are essential to working cooperatively with other people.
  • It ensures that people cannot go back on what they have said when it suits them.
  • It provides an objective record of what was agreed, when and by whom.
  • It is a basis for legal claims that require contracts in writing.
  • It helps you plan for, and anticipate, your operational, financial, business and reputational risks (at least these – there may be more). The more you can do so before you start any work, the better.
  • It helps protect you in case of a future breakdown in personal or business relationships.
  • It can help define the state of work before the contract period commences, and if there is intellectual property, it should say who owned it before, during and after the contract.

Legal Advice

If you can afford it, obtain some basic legal advice as to what you can use as a contract to sell your services, e.g. a standard contract for the development of specific software that you can use as a template for all your clients. This is well worth thinking about if you have lots of similar work, or you think your product offerings are similar. The professional skills and knowledge that lawyers bring to your business include the ability to identify the kinds of issues I’ve described above, and to prepare and document some proposals you can practically use.

Whether you need to pay for legal advice, and the kind of legal advice you want, depends on things like:

  • How important your work and risk planning is to you – financially, reputationally.
  • How important each individual contract is to your business. Do you do lots of small work or large, project-based work?
  • How much business experience do you and your clients have?
  • Is your client very experienced? Do they already have standard terms that they might try and ask you to accept?
  • Are there standard contracts that are accepted in your industry, that can be the basis for the legal advice you obtain?

A lawyer will help you understand other legislation and laws that relate to your industry and type of contract, and know how they modify or supplement your contract terms.

The Precautionary Principle in Australian law

The Precautionary Principle is an evolving concept: first arising in German law, migrating to international environmental declarations, and then becoming part of national legislative interventions for the purposes of environmental protection.

Australia enthusiastically adopted the term ‘precautionary principle’ in legislation in the 1990’s, but it missed a significant point – it’s a guideline in aid of a more general duty. That needs to be the focus of both legislative interpretation of the present law, and also future law reform.

The precautionary principle in an international context

The descriptive phrase ‘precautionary principle’, as used now, is not the name originally given to the concept. References to the concept appeared in international treaties in the 1980’s and 1990’s, and were gradually refined over that period. It was referred to then as a ‘precautionary approach’ to measures intended to prevent environmental degradation.

In December 2015 (translated February 2016), European Parliamentary Researcher Didier Bourguignon prepared an in-depth analysis of the precautionary principle for the European Union in a document titled The precautionary principle: Definitions, applications and governance[1]. He referred to the origins of the precautionary principle in a related concept (the foresight principle) in 1970’s German Law. He stated that the first reference to precautionary principles, approaches or measures was in the 1985 Vienna Convention for the protection of the Ozone Layer[2].

Goldstein [3] noted that the Precautionary Principle was stated in the Rio Declaration [4]:

Nations shall use the precautionary approach to protect the environment. Where there are threats of serious or irreversible damage, scientific uncertainty shall not be used to postpone cost-effective measures to prevent environmental degradation.

In the 1992 Rio Declaration on Environment and Development, it was contained within Principle 15:

In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.

Significantly, however, Principle 15 was listed after other preliminary principles that emphasised States’ ‘right to development’. Also, the more general topic of ‘environmental degradation’ was addressed in the earlier principles, including, for example, Principle 14:

States should effectively cooperate to discourage or prevent the relocation and transfer to other States of any activities and substances that cause severe environmental degradation or are found to be harmful to human health.

In that principle, the primary concern was ‘any activities …that cause severe environmental degradation’. The prevention of environmental degradation was paramount, as it really is in Principle 15, where the relevant phrase indicating the ultimate purpose is ‘measures to prevent environmental degradation’. Similarly, in Principle 23 the ultimate objective is protection of the environment:

The environment and natural resources of people under oppression, domination and occupation shall be protected.

By the early 2000’s the concept was being referred to as the ‘precautionary principle’. A 2001 paper by Kriebel et al that examined the terminology said:

The term precautionary principle [as distinct from the term ‘prudent avoidance’] has the advantage that it provides an overarching framework that links environmental sciences and public health.[6]

Kriebel et al also noted:

The precautionary principle, by calling for preventive action even when there is uncertainty, by placing the onus on those who create the hazard, and by emphasizing alternatives and democracy, is viewed by environmentalists as a way to shift the terms of the debate and stimulate change.

The context for the principle

Before the need for the precautionary principle even arises, there must exist some consensus or agreement (including that found in international treaties since the 1980’s) that it is necessary to exercise some constraint on the normative behaviours that would otherwise degrade the environment. It is a guideline premised on that more general purpose.

The precautionary principle is not the logical starting point for reasoning about duties of environmental protection.

We have three elements:

  • The general duty to protect the environment and/or avoid environmental degradation (whether conditionally or unconditionally);
  • The easy cases, where there is relative scientific certainty regarding what the harm is, and how to avoid it (obvious ‘foreseeability’ and higher probability); and
  • The hard cases, where there is a degree of scientific uncertainty regarding what the harm is, and/or how to avoid it (less certainty).

If we accept that the precautionary principle is intended to provide guidance in ‘hard cases’, with respect to a general duty to prevent environmental degradation, then we can more easily appreciate that there are other, easier cases equally captured within the same general circumstances. Where there is greater scientific certainty, for example, there should be an even stronger argument that the same general duty exists to implement cost-effective measures to avert environmental degradation.

The utility of the precautionary principle is directed to ‘hard cases’

Attempts to define the precautionary principle in terms of over-arching principles suffer from the problem of giving it a more general scope than intended. Attempts to elevate it to the sole foundational principle for the prevention of environmental degradation are likely to lead into more confusion and error.

When international treaties invoke the precautionary principle, it is intended to provide guidance as to law-making policies for member states. It emphasises the values that need to be held highest – that environmental degradation is a sufficiently serious topic that it requires action rather than inaction. Significantly, its focus is on these ‘hard cases’ – the ones involving scientific uncertainty.

Legislative ratification of the precautionary principle requires some appreciation of this context, in order to provide a comprehensive, rational system for using it. You cannot intelligently legislate using the ‘precautionary principle’ without also having endorsed a more general duty to prevent environmental degradation.

Australian legislative responses

In Australian law, references to the need to preserve biological diversity and ecological integrity sometimes occur in the same context as the precautionary principle. These considerations ought to be more closely related to the more general duty to prevent environmental degradation than the precautionary principle itself. That they may sometimes be included with it suggests that the anticipation of environmental harm, in the future, and taking reasonably necessary steps to prevent it, is the unifying principle or duty for all these elements.

The language in the Rio Declaration Principles influenced the Australian National Strategy for Ecologically Sustainable Development [5]. This strategy preceded the Environment Protection and Biodiversity Conservation Act 1999 (Cth) (EPBC Act) by a few years. In section 3A of the EPBC Act some of the Rio Declaration Principles (the ‘Rio Subset‘) are reproduced in summary form:

3A Principles of ecologically sustainable development

The following principles are principles of ecologically sustainable development:

(a)  decision‑making processes should effectively integrate both long‑term and short‑term economic, environmental, social and equitable considerations;

(b)  if there are threats of serious or irreversible environmental damage, lack of full scientific certainty should not be used as a reason for postponing measures to prevent environmental degradation;

(c)  the principle of inter‑generational equity—that the present generation should ensure that the health, diversity and productivity of the environment is maintained or enhanced for the benefit of future generations;

(d)  the conservation of biological diversity and ecological integrity should be a fundamental consideration in decision‑making;

(e)  improved valuation, pricing and incentive mechanisms should be promoted.

A more prepared legislature would have considered all of the necessary foundational concepts that needed to be expressly incorporated into the legislation. As part of this, it would have placed the precautionary principle in the service of the more general duty.

As a general matter, a list of ‘principles’ agreed for the purposes of an international treaty may require a more logically structured representation in local legislation in order to ensure better comprehension and clarity.

The need to expose the implicit general protective duty in State-based legislation

A more general duty of environmental protection (and/or to prevent environmental degradation) is implicit in the use of the precautionary principle in most of the Australian statutes that refer to it. As a matter of general interpretation, this general duty and purpose should be accepted in each legislative document in which the precautionary principle is applied to environmental matters (whether that be a statute, code, plan etc).

This should be a matter capable of legislative re-interpretation in light of the changes since 1992, the adoption of the National Strategy, and subsequent legislative instruments embodying environmental protection laws. This includes the Environment Protection and Biodiversity Conservation Act 1999 (Cth).

The same scheme as section 3A of the EPBC Act is reproduced in section 4 of the Environmental Protection Act 1986 (WA), which states that the object of the Act is ‘to protect the environment of the State, having regard to the following principles’, and then reproduces principles from the Rio Subset similar to those in the EPBC Act. However, it leaves out the first principle (long and short term goals) that appears in the EPBC Act, and includes another principle at the end (waste minimisation).

The re-evaluation of normative values requires a comprehensive review of the extent to which legislative instruments expressly advert to the general duty to protect the environment, and whether there is a need for a logical structure that is more useful than the checklist approach adopted by the drafters of the legislation. In my view, we need to recognise that this set of principles does not convey the whole of the matters described there (including the context of the precautionary principle and, in turn, the relevance of the distinction between matters with scientific certainty and scientific uncertainty). This task remains incomplete.

The role of progressive test cases should not be underestimated. Since explicit amendments to the legislation to clarify the situation have not been made, it may require specific cases to undertake statutory interpretation exercises until a level of consensus is reached in the common law.

Subsidiary legislative documents

Where a statute also contemplates further, subsidiary legislative documents being prepared (like management plans, codes, development plans, conservation plans etc), then in each case it should require the same general values to be consistently applied all the way down the legislative chain.

This recognition that there is a unifying general duty involved should make it easier to flexibly design subsidiary legislative documents. There should be some balanced and complementary attention given to the ‘hard cases’ and the ‘easy cases’, and the precautionary principle is only required for the ‘hard cases’. There does not have to be any ongoing difficulty in finding ways to express the precautionary principle at different levels of generality or different levels of decision-making. If it is not needed, then there is still a positive general duty to take action to prevent environmental degradation for the remaining ‘easy cases’.

Project-level legislative documents

In relation to clearing, legislative regulation of environmental protection has been pushed down, into the contents of management plans and development project action plans that are intended to bind project proponents. In Victoria, Regional Forestry Agreements and ‘Codes of Conduct’ are the practical records of what is largely a pattern of self-regulation by those that have the means to degrade the environment as part of ‘business as usual’.

The preparation of management plans for conservation is part of the legislative process, and development approval processes. This activity is still subject to judicial review, because it is a requirement of other, more general duties.

Legal review of these plans has often focussed on compliance with the ‘precautionary principle’, because this is an explicit phrase found in the legislative documents, like the Code. However, reliance on this phrase tends to narrow the focus too early to a subset of the circumstances: the ‘hard cases’. This has led to confusion (or legal argument) about whether uncertainty, as well as a serious threat, is a precondition to the precautionary principle. Yet the precautionary principle is most relevant in precisely that situation. This merely reminds us that the principle applies to a subset of the circumstances in which the prevention of environmental degradation is to be considered.

The Possums Case

In the Possums Case[9], the parties were in dispute about the requirements of a Code of Conduct and a forestry logging plan. These plans were intended, in part, to prevent unlawful environmental degradation (the loss of endangered species). In that context, the parties were also in dispute about whether the ‘precautionary principle’ applied to give rise to a duty that was breached by the relevant conduct.

The Federal Court (Mortimer J) found that the forestry and conservation plans themselves were not actually working to protect endangered species on the ground, nor did it seem they had been designed with any track record of being effective. Her Honour found that VicForests was in breach of the Code of Conduct, and was likely to continue to be in the future.

It is likely that this conduct would have failed the more general duty to prevent environmental degradation, but the case was not dealt with that way.

The case focussed on the application of the precautionary principle. This was because the precautionary principle demanded action in that case, rather than postponement. It was, in effect, treated as one of the ‘hard cases’, and one in which there was sufficient uncertainty (about how to prevent the harm, which was serious), that postponement of action could not be justified.

The case illustrates that even in a case with scientific uncertainty, there is a need to both plan and act; planning alone will not be enough, and poor planning will need to be reviewed and revised.

The variable need for the precautionary principle

If we accept the precautionary principle is a guideline for some circumstances to which a more general duty attaches, then of course it will have more scope for application in some cases than others. This does not mean we have to jettison our compliance with the more general duty to avoid environmental degradation.

It is obviously possible that with more knowledge, time and information, cases for which the precautionary principle was once apposite (‘hard cases’) will become ‘easy cases’, and specific guidance from the precautionary principle (in implementation of laws or operational actions) will not be needed. That is because we will simply be able to comply with a more general duty to prevent environmental degradation, in a situation of greater certainty.

We should always be asking if circumstances have proceeded from uncertainty to certainty (from the ‘hard case’ to the ‘easy case’). The general duty to prevent environmental degradation is a continuing one – the fact that the precautionary principle might have applied in the context of a project approval, or a decision requiring environmental approval, simply means the parties were dealing with a ‘hard case’ at that time. Having done so, there is no reason why they are released from the need to pay attention to the purpose for which action was taken. It is still relevant to ask whether they did or did not succeed in preventing the environmental degradation that was of concern.

Conclusion and summary

It is imperative for environmental legal cases that involve interpretations of the phrase ‘precautionary principle’ that the more general duty to prevent environmental degradation, implicit in the use of that term, is given express recognition, as a matter of legislative interpretation.

The phrase ‘precautionary principle’ should be understood as providing guidance in respect to ‘hard cases’, but that doesn’t mean attention shouldn’t be given to discharging the more general duty to prevent environmental degradation in the ‘easy cases’.

The more general duty (and the application of the precautionary principle to provide guidance in the ‘hard cases’) is a continuing duty to prevent environmental degradation. This is essential to ensure that it is workable, but also to ensure that the precautionary principle can be used as appropriate, based on the available information at any given time.

The precautionary principle may have variable application, even for the same subject matter, because knowledge changes over time. Where the precautionary principle might once have been needed, over time, greater certainty might enable us to clearly state there is an obvious duty to act in a particular way to prevent environmental degradation.


  1. Bourguignon (European Union), The precautionary principle: Definitions, applications and governance (2016), doi:10.2861/821468 URL:
  2. Bourguignon, ibid, paragraph 2.1.
  3. Goldstein, Environ Health Perspect 107:A594–595 (1999). Available at URL: (accessed 5 July 2020)
  4. United Nations Conference on Environment and Development (Earth Summit), Rio Declaration on Environment and Development, Rio de Janeiro, Brazil. Publ no. E.73.1.A.14. Stockholm: United Nations, 1992.
  5. URL:
  6. Kriebel et al, ‘The Precautionary Principle in Environmental Science’ (2001), Environmental Health Perspectives, 109(9), page 871
  7. URL (Accessed 5 July 2020)
  8. Raffensperger C, Tickner J (eds), Protecting Public Health and the Environment: Implementing the Precautionary Principle, Washington, DC: Island Press, 1999.
  9. Friends of Leadbeater’s Possum Inc v VicForests (No 4) [2020] FCA 704 (27 May 2020) (‘Possums Case’)
  10. Reproduction of the Rio Declaration at the Convention on Biological Diversity website URL: