A desire for increased accuracy, simplicity and completeness is the primary reason why modellers wish to build large hydraulic models:
- To reflect the network as closely as possible
- To incorporate all the available data
- To build a single model of the whole network that is easy to interpret
- To ensure financial and investment decisions based on the model are as accurate as possible
Historically, large models have proved impossible to build accurately or to operate usefully. This article reviews why this picture is changing, and suggests that the large network model will become the modelling standard of the future.
The quality and usefulness of an individual hydraulic model is driven by the interplay of numerous variables. Chief amongst these are the volume and quality of available data, the quality of the modelling software used to build and run the model, and the limitations of the IT infrastructure upon which it relies.
For these reasons, hydraulic models have historically been relatively small and have borne only a limited resemblance to the network that the model is trying to represent. High cost combined with limited return has constrained the historic contribution that modelling could make to the management of water resources and infrastructure.
Two of these factors have been within the control or influence of the industry.
The availability and cost of data from asset records, mapping and observed system data have improved beyond recognition in recent years: this information is now easily and inexpensively available in digital form. Extensive asset/GIS network data and GIS mapping, combined with easier and more cost-effective data collection, are transforming the ability of modellers to build ‘real world’ network models.
Similarly, the availability of high quality, functionally rich, hydraulic modelling software capable of processing and analysing large volumes of information has also improved very considerably. Modern modelling software can handle large data sets, is easier to use, and offers considerably enhanced result interpretation capabilities.
Improvements in these areas are reducing the cost of infrastructure investments by enhancing the accuracy and appropriateness of their design. The danger of over-engineered solutions is diminishing.
The most significant variable, however, is largely beyond the control of the water industry. Modelling is inevitably constrained by the computer power and capability available to those who wish to build and run these large models. Over the past fifteen years, faster processors, easy-to-use operating systems, and large, inexpensive data storage and memory have made the single greatest contribution to the evolution of network modelling as an accurate and reliable tool upon which investment decisions can be made with confidence. Inevitably, IT investments by water authorities and consultants lag slightly behind the latest technology innovations. Nevertheless, the evolution of chip technology, the corresponding progression from DOS and 286 PCs through to Windows XP, and the growth in data storage and memory alongside its falling cost have had an exciting and stimulating effect.
In order to work within these constraints, hydraulic models have traditionally either been strategic in focus (that is, they included only the main pipes and controls) or they have covered a small geographical area in detail. To create a bigger, more comprehensive picture, the results from these small geographical models are transferred to similar-sized models adjacent to them.
Linking even a small number of hydraulic models in this way, with adjacent inputs and outputs, can be complex and time-consuming, as it requires the synchronisation of both time and geographical inputs. Extrapolating results from the micro to the macro level can also result in over-designed networks. A good example is rainfall distribution, which, if applied widely based on micro-level data, can build inaccuracy into the model. The potential for error is real.
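The time-synchronisation part of this linking step can be sketched in code. The following is a hypothetical illustration only (the function name and units are invented, not any package's API): the outflow hydrograph recorded by one model is linearly interpolated onto the timesteps of the adjacent downstream model before being applied as an inflow.

```python
from bisect import bisect_left

def resample_hydrograph(times, flows, target_times):
    """Linearly interpolate an outflow hydrograph (times in minutes,
    flows in m3/s) onto another model's timesteps.  This is the kind
    of synchronisation needed when chaining adjacent models."""
    out = []
    for t in target_times:
        if t <= times[0]:
            out.append(flows[0])
        elif t >= times[-1]:
            out.append(flows[-1])
        else:
            i = bisect_left(times, t)
            t0, t1 = times[i - 1], times[i]
            f0, f1 = flows[i - 1], flows[i]
            out.append(f0 + (f1 - f0) * (t - t0) / (t1 - t0))
    return out

# Upstream model reports outflow every 15 minutes; the downstream
# model runs on 5-minute steps, so the hydrograph is resampled.
inflow = resample_hydrograph([0, 15, 30], [0.0, 3.0, 1.5],
                             [0, 5, 10, 15, 20, 25, 30])
```

Even with the interpolation handled, the geographical matching of boundary nodes between the two models remains a manual, error-prone task, which is part of the complexity the text describes.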
Such strategic models are generally created and calibrated for project specific issues. Usually limited to a single use scenario, they are difficult to maintain and apply to wider or other applications.
Whilst there remain good reasons for smaller, skeletonised or simplified models (see below), the claim that bigger, more detailed models make for better modelling and improved decision-making is incontrovertible. The increasing frequency with which multiple systems are synchronised and integrated demands a higher level of detail to ensure consistency and enable effective decision-making. Even when this slows the model run time, the improved quality of the result is worthwhile.
The growth in processor speed, larger and cheaper data storage, improvements in software functionality and the availability of improved data quality (whether network, flow, or GIS) have all facilitated the development of larger models and created the prospect of hydraulic models that much more closely reflect the systems they represent.
At Wallingford, we have witnessed this growth in model size at first hand. In the UK, models exceeding 40,000 nodes are in use. In the USA, models detailing over 50,000 nodes are being created whilst in Japan, models of over 90,000 nodes have been built, with important decisions being based upon their findings. Several major US cities are making efforts to build models detailing every pipe and node in their systems. In addition to the desire for more accuracy, these models provide a direct link between service, performance of the system and the customers’ location in the system. In addition, regulatory incentives make detailed models a valuable tool for a city attempting to derive the best and most cost-effective solution for solving their network problems.
It follows that the larger the volume of current, complete and accurate data contained within the model, the more consistent, reliable and error-free the results. However, it also follows that the larger the model, the slower the run time, even with the latest processors and the fastest hydraulic modelling software.
The desire for a more accurate and complete model can conflict with the operational desire for practical run times.
Model run times reflect the type of model being simulated as well as its size and the processor speed of the computer on which it is run. For example, wastewater models, with their much greater number of physical calculations, take longer to run than water supply models. An acceptable run time for a wastewater model is completely unacceptable to the water supply modeller used to run times of a few minutes.
Unusually for an industry known for its objectivity, acceptable run times are highly subjective. Running multiple simulations overnight may be perfectly acceptable, whilst a mid-morning run that lasts only a few minutes can seem interminably long. A case of the watched pot that never boils.
Ironically, modelling may be getting faster and slower at the same time because of the interplay between model size and faster simulation. Faster simulation encourages the building of larger models (for which run times remain static while the model grows significantly in size) or the running of a larger number of scenarios in a similar or shorter timeframe (so that decisions can be reached more quickly). For some, the shortest run time imaginable will never be short enough.
Skeletonisation or Simplification
Run time length in part accounts for the continuing interest in skeletonisation amongst the modelling community.
There are two scenarios in which skeletonisation has a role to play.
- When the data to populate the model is of poor quality or inconsistently available, or
- When the run time for the model is unacceptably slow.
However, skeletonisation is not without its downsides.
First, the process of simplifying an existing system to a strategic model is time-consuming. Second, it requires many assumptions to be made along the way with each one reducing the accuracy of the model and increasing the risk of error.
As an example, consider the removal of manholes from a wastewater system. The storage contained in these removed manholes can be accounted for in various ways. It can simply be ignored, as if it made no difference to the subsequent system at all; it can be added to the nearest available upstream or the nearest downstream manhole; or it can be divided between both of them by a derived percentage. Once this decision has been made, a further choice presents itself: whether the storage should be accounted for within the shaft or the chamber of the manhole. Different countries, organisations, consultants and even different projects have their own procedures for which option is appropriate. At best, this is inconsistent and at worst unrepresentative.
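The options described above can be sketched as a single apportioning step. This is a minimal, hypothetical helper (the function name, the `method` values and the use of plain volumes rather than shaft/chamber geometry are all invented for illustration, not any package's behaviour):

```python
def redistribute_storage(removed_volume, method, upstream, downstream):
    """Apportion the storage of a removed manhole (all volumes in m3).

    'method' mirrors the choices in the text: "ignore" the volume,
    assign it wholly "upstream" or "downstream", or pass a float
    giving the derived fraction assigned upstream, the remainder
    going downstream.  Returns the new (upstream, downstream) storages.
    """
    if method == "ignore":
        return upstream, downstream
    if method == "upstream":
        return upstream + removed_volume, downstream
    if method == "downstream":
        return upstream, downstream + removed_volume
    if isinstance(method, float):  # fraction assigned upstream
        return (upstream + method * removed_volume,
                downstream + (1 - method) * removed_volume)
    raise ValueError(f"unknown method: {method!r}")
```

The point of the sketch is how arbitrary the choice is: three projects applying three different `method` values to the same network will produce three different models, which is exactly the inconsistency the text warns about.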
Another significant example involves the assignment of population density information, whether for a sanitary system or a water distribution system. Accurately determining which areas truly connect to which part of the system is difficult, and the subsequent results carry a high potential for error.
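The assignment problem can be illustrated as an area-weighted allocation. The sketch below is hypothetical (the function and the idea of supplying pre-computed overlap fractions are assumptions for illustration; in practice these fractions come from a GIS overlay, and any misestimate propagates directly into the loads placed on the model):

```python
def allocate_population(block_populations, overlap_fractions):
    """Area-weighted allocation of population blocks to model nodes.

    block_populations: {block_id: population}
    overlap_fractions: {block_id: {node_id: fraction of the block's
        area falling within that node's catchment}}
    Returns {node_id: allocated population}.
    """
    loads = {}
    for block, pop in block_populations.items():
        for node, frac in overlap_fractions.get(block, {}).items():
            loads[node] = loads.get(node, 0.0) + pop * frac
    return loads

# A block of 100 people straddling two catchments is split 25/75.
loads = allocate_population({"b1": 100},
                            {"b1": {"n1": 0.25, "n2": 0.75}})
```

In a heavily skeletonised model the catchments are large and few, so a wrongly drawn boundary shifts whole blocks of demand to the wrong part of the network; in an all-pipes model the fractions approach 0 or 1 and the scope for this error shrinks.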
Once nodes have been simplified out of a model and results generated, if a critical area is identified, such as flooding manholes or low-pressure zones, are these locations representative of the true geographical location of the failure, or simply the nearest available point in the model? With a simplified model, this question is difficult to answer without major modification.
Models created with limited detail are consistently difficult to calibrate, partly and obviously because they do not truly represent the system they are intended to replicate. Moreover, in most cases the adjustments made to calibrate against observed information create a model that represents the system even less faithfully, which certainly limits its re-use and general application.
With more and more hydraulic models being built from comprehensive GIS or asset databases, simplification immediately breaks the link between an organisation’s core data and the model. This means not only that it is difficult to relate information and results gained from the modelling process back to the core data set, but also that the issues of asset and model maintenance become almost impossible to manage.
Large, more detailed models represent real world networks much more closely. If maintained correctly, this can save time in model build, calibration and the interpretation of results. It also creates a more multi-purpose model that can be integrated with other corporate systems for a diversity of uses and easy maintenance.
Skeletonised models still have a role to play, particularly where data is lacking or where a shorter run time is crucial (real-time operational modelling, for instance). However, simplified models can generate poor-quality information on which to base decisions. Further, there is always the danger of reading detail into a model where data is lacking.
A primary driver behind the emergence of large models is the growth in computing power available at the desktop, and this remains a core issue for the future of modelling. As desktop computing power continues to increase, modern modelling systems allow users to capitalise on its potential by building and running larger, more detailed models, and to enjoy the resulting gains in model-building efficiency, modelling detail and accuracy of results.