Models of river channels and river flows have been used to good effect for many years. HR Wallingford Group, in the UK, has been building physical models of rivers for nearly 60 years. Some years ago, it was predicted that physical modeling would soon cease to exist, to be entirely replaced by computer models. This has not happened; in fact, the demand for physical models is greater than ever. At the same time, the demand for computer models as part of river management has grown, and now outstrips the demand for physical models.
Computer modeling began as an academic exercise undertaken by researchers, and many people still regard it in this way. Yet the power of computer models is now available to river managers for everyday catchment management. Perhaps not everyone has recognised it, but river modeling is no longer just a research activity but an operational tool. There is still a need for further research, but this can run in parallel with the operational use of models, and complement it.
In the past, a single expert or perhaps a small team would derive or choose the relevant flow equations, write the computer program to apply the equations to a particular stretch of river, collect the data to populate and calibrate the model, carry out the simulation runs and analyse the results. Models would be built to answer a single specific question.
The current situation
River modeling has moved on a long way from those days. Currently, there is an expectation that algorithm experts will research improved techniques, software development experts will build simulation products, surveyors will collect data, and consultants or modelers will populate and run models. Modeling is a collaborative exercise. Yet modeling still tends to be managed as if it were an academic exercise, and it has not taken its rightful place as an essential tool for river managers. There is a need to make better use of IT, in order to release more of the power of modeling.
Advances in modeling
If modeling is to be used more extensively, it needs to become the core of decision support systems. The management of river basins has become more sophisticated. There is recognition of the competing demands of hydropower, irrigation, domestic supply, industrial supply, wastewater and stormwater collection, flood management, amenity use, water quality and ecological protection. In river basin management, there is a desire to understand all the implications of social changes and engineering works.
The situation is becoming extremely complicated, and there is a need for suitable decision support systems to advise and inform. Good decision support systems combine information about what is happening in the catchment with information on the impact of various management actions – what could happen in the future. It would be desirable for these systems to advise on the best course of action.
Simulation is clearly an important technology for decision support. Commonly, problems are divided into single issues, and an individual simulation model is applied to each. Modelers have grown accustomed to this way of working, and take for granted the severe restrictions that this approach imposes on decision support.
The problem lies in the interactions between processes in river catchments. It is not possible to accurately represent these interactions as simple boundary conditions. So increasingly complicated combinations of models have been constructed, and many runs carried out in order to answer questions that cross the borders between different processes.
But what if there were a framework that allowed simulation not just of the processes themselves but also of the complex interactions, or links, between them? It would then be possible to answer complex, cross-boundary questions using far fewer simulation runs. The answer is integrated modeling: taking individual models created by experts in their fields and linking or combining them through data exchange protocols, so that the integrated set can simulate the behaviour of all of the interconnected processes in a river basin. It should then be possible to combine simulations of rainfall, hydrology, hydraulics, groundwater, runoff, channelled flows, surface flows, piped flows, water quality, ecology, economics and many other processes.
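The linking idea can be sketched in outline. The components and interfaces below are invented for illustration only; they are not the API of any real data exchange standard or product. Two toy models, one hydrological and one hydraulic, exchange values at each time step instead of being hard-wired together:

```python
# Illustrative sketch of linked simulation via data exchange.
# Component names and interfaces are invented; they do not represent
# any real modeling standard's API.

class RainfallRunoff:
    """Toy hydrology model: converts rainfall depth to runoff inflow."""
    def __init__(self, rainfall_series, coefficient=0.6):
        self.rainfall = rainfall_series
        self.coefficient = coefficient

    def step(self, t):
        # Runoff is a fixed fraction of rainfall in this toy model.
        return self.coefficient * self.rainfall[t]

class RiverChannel:
    """Toy routing model: simple linear-reservoir storage routing."""
    def __init__(self, k=0.5):
        self.storage = 0.0
        self.k = k

    def step(self, inflow):
        # Outflow is a fixed fraction of current storage.
        self.storage += inflow
        outflow = self.k * self.storage
        self.storage -= outflow
        return outflow

def run_linked(rainfall):
    """Driver: at each step, pass the hydrology output to the channel."""
    hydrology = RainfallRunoff(rainfall)
    channel = RiverChannel()
    return [channel.step(hydrology.step(t)) for t in range(len(rainfall))]

flows = run_linked([0.0, 10.0, 20.0, 5.0, 0.0, 0.0])
```

The point of the pattern is that neither model knows anything about the other's internals; the driver only moves values across the boundary, which is what allows models from different suppliers to be combined.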
For years, people have built hard-wired, integrated models to be used for a specific purpose. These have not been flexible enough to be used for general decision support purposes. But now there is a new technology that allows models to be connected effectively and flexibly – the OpenMI data exchange standard. Although this is a new technology, its use around the world is increasing rapidly.
Wallingford Software has incorporated OpenMI into its river modeling system, InfoWorks RS, and its sewer modeling system, InfoWorks CS. This means that sewer models and river models can be easily and cost-effectively combined for analysing complex situations where sewers and rivers interact. Equally importantly, any other model that is OpenMI-compliant from any other supplier or researcher can be linked to these two standard models. So the possible combinations are endless.
There is a real need for integrated modeling, and at last it is possible in a flexible and cost-effective way. But it brings a requirement for greater computer power, and it makes displaying results more complicated. Most of all, there is a need to reassess modeling procedures. A model built today may be used standalone, but increasingly it will be used for decision support in combination with other models. It is necessary to question the assumptions under which models are built and calibrated.
Those involved in modeling are aware of the long-standing problem of lack of data. River channel surveys are expensive, flood-plain data is difficult to source, and flow and level data are available at only a limited number of sites. Very few flow and level data are available for the critical periods of low flow and flood, so models have always been difficult to build and calibrate.
However, some types of data can now be gathered by remote sensing, such as LIDAR and satellite mapping, and water quality measuring instruments are cheaper and more reliable than ever before. This is bringing about a dramatic change in modeling, and with it new issues. There are now increasing amounts of data, and a need for better data management techniques to ensure that the data is utilised: it must be referenced, re-used where appropriate, and have information extracted from it. Environmental modeling places major demands on data management and data management technology.
Better models can be built using the data available, and more extensive models using integrated modeling. However, there are also changes in how model results should be understood. Good modelers understand the compromises and assumptions they have made in building a model. But there is a worrying tendency to accept the results of simulation models uncritically, or to trust them in circumstances for which they were never intended.
Modelers are starting to face up to the implications of uncertainty. Given the uncertainty in every aspect of building and running models, it should be no surprise that there is uncertainty in the results. Yet this is notoriously difficult to quantify. Formal statistical techniques tend to overestimate the uncertainty, and such simplified statistical expressions of uncertainty cast doubt on the value of the simulation models, even though those models clearly do give useful results. Researchers and modelers are struggling with this dilemma. Solutions generally involve carrying out more simulation runs and making clever use of results displays. In turn, this places more strain on modeling systems, which must run more simulations and manage all the extra results.
Uncertainty leads to consideration of risk analysis. River managers have difficult choices to make, and one technique that can help is risk analysis. In its simplest form, the risk of a particular outcome is the likelihood of that outcome multiplied by its consequence. It is now widely accepted that risk analysis can be a valuable technique for flood management. Indeed, flood management has always involved consideration of risk, but to date very little formalized risk data has been available to managers. However, the European Flood Directive will be a risk-based directive, and in other parts of the world similar guidelines are being adopted. Flood risk analysis tools are being produced, often following the ‘source, pathway, receptor’ pattern, and the output of these tools is usually some form of flood risk map, via GIS.
As yet, flood risk analysis is a crude and imperfect tool, and there are particular dangers in publishing flood risk maps. Several issues are involved. Firstly, ‘consequence’ is highly subjective. An urban planner may regard the most important consequence as being the cost of flood damage to buildings and other infrastructure. An insurer is also interested in this cost, but will add to it the cost of life-insurance payouts. Emergency service planners are interested in the number of people affected and the damage to access routes.
Secondly, the causes of flooding in a particular area can be numerous, particularly in towns and cities. Analysis of all of these flood risks is difficult and time consuming, so tends to be undertaken for only one cause (such as fluvial flooding, sewer failure or coastal surge). Thirdly, the multiplication of probability and consequence gives a simplistic answer that identifies areas of high risk. However, in order to identify ways to reduce risk, the components of risk need to be deconstructed. For example, it may be pointless spending money improving coastal defences if the most likely cause of flooding is decaying sewers.
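Both the likelihood-times-consequence calculation and the need to deconstruct it can be sketched with invented figures. All probabilities and damage values below are purely illustrative:

```python
# Toy flood risk calculation: risk = annual probability x consequence,
# computed per cause so the dominant source of risk can be identified.
# All figures are invented for illustration.

causes = {
    # cause: (annual probability of flooding, damage if it occurs, GBP)
    "fluvial": (0.01, 5_000_000),
    "sewer failure": (0.05, 800_000),
    "coastal surge": (0.002, 20_000_000),
}

# Expected annual damage per cause.
risk = {cause: p * c for cause, (p, c) in causes.items()}

# The cause contributing most to overall risk.
dominant = max(risk, key=risk.get)

for cause, r in sorted(risk.items(), key=lambda kv: -kv[1]):
    print(f"{cause:>14}: {r:,.0f} GBP/year")
print("dominant risk source:", dominant)
```

With these invented numbers, fluvial flooding dominates even though a coastal surge would be far more damaging, which is exactly why the components of risk must be examined separately before money is spent on defences.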
Finally, the likelihood of flooding can be very localised within a particular area - two properties separated by just 400m can have vastly different chances of being inundated. This means the values represented on flood risk maps are highly dependent on the granularity of the underlying analysis. Contrast this with weather mapping, for which those same two properties are extremely likely to experience the same conditions.
These are just some of the difficulties of flood risk analysis. An extreme example of the difficulties that could arise is if members of the public were to try to use urban planning risk maps to identify safe places to evacuate if a flood were imminent. They could simply end up in the nearest high-value location, which might not be safe at all. People instinctively know that risk analysis is a useful technique, and the publication of inundation probability maps saves lives and money. But extreme care must be taken that risk outputs are used only for the purposes for which they were created.
Risk analysis systems must have the flexibility to take these issues into account, or accept severe limitations on their sphere of use. Clearly, there is much more work to do here. One certain outcome is that risk analysis generates the need for more simulation runs, often many more under Monte Carlo analysis. The quality of the simulations should not be degraded below an acceptable level, so the increased demand falls instead on IT systems.
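The Monte Carlo pattern behind that demand for extra runs can be sketched briefly. The "simulation" below is a trivial stand-in for a full hydraulic model run, and all distributions and thresholds are assumed for illustration:

```python
# Monte Carlo sketch of flood risk: sample uncertain inputs, run a
# (stand-in) simulation many times, and estimate the probability of
# exceeding a flood threshold. Not a real hydraulic model.

import random

random.seed(42)  # fixed seed so the sketch is repeatable

def simulate_peak_level(rainfall_mm, roughness):
    # Stand-in for one full hydraulic simulation run.
    return 0.01 * rainfall_mm / roughness

THRESHOLD = 2.0   # flood threshold level in metres (invented)
N_RUNS = 10_000   # each iteration would be a full model run in practice

exceedances = 0
for _ in range(N_RUNS):
    # Sample uncertain inputs from assumed distributions.
    rainfall = random.gauss(100.0, 30.0)   # mm, assumed distribution
    roughness = random.uniform(0.4, 0.8)   # dimensionless, assumed
    if simulate_peak_level(rainfall, roughness) > THRESHOLD:
        exceedances += 1

p_flood = exceedances / N_RUNS
print(f"estimated flood probability: {p_flood:.3f}")
```

The key cost is visible in the loop: every sample is a complete simulation run, so where one deterministic study needed one run, a Monte Carlo study needs thousands, which is precisely the strain on IT systems described above.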
The desire for computer simulations in river management is increasing dramatically. Does IT offer any answers? Of course, processor power increases year on year, but not by as much as desired. It is also necessary to be smart about the way IT is used. For instance, more attention should be paid to quality control of infrastructure data and model data. So much time is lost, and so many management mistakes are made, because of lost data and uncertainty about model results.
It should always be possible to trace back and identify all input data used to create model results, and all models should be version controlled. The InfoWorks RS river modeling system uses advanced database technology to provide version control and an audit trail for all river models. As explained, modeling has become a collaborative effort, involving many different people over time. Multi-user database technology, as provided by InfoWorks RS, is required to provide order in this complicated working environment.
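The essence of such an audit trail can be sketched in a few lines. This is an invented illustration of the general idea, not a description of how any particular product implements version control:

```python
# Minimal sketch of a model audit trail: every change to model data is
# recorded with who, when, and what changed, so any set of results can
# be traced back to the exact inputs that produced it. Illustrative
# only; names and structure are invented.

import datetime

class AuditedModel:
    """Model data store that records every change as a new version."""
    def __init__(self):
        self.data = {}
        self.history = []  # append-only audit trail

    def set(self, key, value, user):
        # Record the change before applying it.
        self.history.append({
            "version": len(self.history) + 1,
            "user": user,
            "time": datetime.datetime.now().isoformat(),
            "change": (key, self.data.get(key), value),  # key, old, new
        })
        self.data[key] = value

    def audit_trail(self):
        return [(h["version"], h["user"], h["change"]) for h in self.history]

m = AuditedModel()
m.set("channel_roughness", 0.035, user="surveyor_a")
m.set("channel_roughness", 0.040, user="modeller_b")
print(m.audit_trail())
```

Because the history is append-only, it answers the two questions that matter in a multi-user team: which inputs produced a given result, and who changed what, when.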
With larger modeling teams, and the desire to use a lot of data from widespread sources, it is inevitable that modelers will want to make use of the cheap and robust data communications enabled by the internet. Improvements to underlying data are driving dramatic changes in river management. And countries that are developing their infrastructure rapidly, such as China, are particularly keen to adopt the internet within river management.
Some engineers regard modern user interfaces as unnecessary. They miss the point that better user interfaces allow the same views of the river network and flows to be shared by everyone involved. This is extremely valuable, ensuring that everyone has the same understanding of potential problems and solutions, and allowing engineers to make their points to decision makers. So the dramatic user interfaces now available for river models, with GIS views, long sections, 3D views, contouring and animated flows, should be welcomed.
It would be desirable for these modern user interfaces to be available over the internet for thin-client systems. To date, this has not been possible. Browser interfaces have been suitable for simple tasks, but not flexible enough for modeling. This is all changing, and soon browser interfaces suitable for many modeling tasks will be available.
The internet also represents a change in the way people consider computing power. For two decades, PCs have given the best-value computing power available to river modelers. One or two organisations have taken a different route, relying on centralized minicomputers, or even supercomputers, for their modeling power. But most people have relied on the PC on the desk. Yet now there is a tendency to expect valuable model data to be saved on a data server somewhere else on the network.
Modern database technology allows this to happen without the user needing to worry about the location of the server. And the same will happen with processing power. Why should modelers worry about where their models are running – why should they even know where? Increasingly, they won’t have to. The simulation processing will take place elsewhere on the network. Much has been made of the potential benefits of grid computing, but the benefits of smaller ‘server farms’ are already available to modeling teams.
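The dispatch pattern behind a server farm can be sketched simply. Here a local thread pool stands in for remote workers, and the simulation function is a trivial placeholder; the pattern of farming out independent runs and collecting results is the same whether the workers are local cores or servers elsewhere on the network:

```python
# Sketch of farming out independent simulation runs to a pool of
# workers. A local thread pool stands in for a remote server farm;
# the "simulation" is a trivial placeholder, not a real model.

from concurrent.futures import ThreadPoolExecutor

def run_simulation(scenario_id):
    # Stand-in for one full model run: returns (scenario, peak flow).
    return scenario_id, 100.0 + 5.0 * scenario_id

scenarios = range(8)  # e.g. eight design storms to simulate

# The modeler submits all runs and collects results; where each run
# actually executes is the pool's concern, not the modeler's.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_simulation, scenarios))

print(results)
```

Because each scenario is independent, the runs parallelize trivially, which is why Monte Carlo and multi-scenario studies are such natural candidates for server farms.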
There will also be a rise in the demand for integrated modeling. It is necessary to ask whether individual models and data sets need to exist on a single computer. People would like to be able to integrate models and data sets that exist on different computers, in different countries if necessary. While technology does allow this, there are still practical difficulties and data communication delays that mean integrated modeling over the internet is still a difficult proposition. Perhaps that will change soon.
Modeling has evolved from an academic exercise for researchers into an important river management tool. The demand for modeling services is now increasing, and modeling systems need to make better use of the IT available in order to meet that demand in a safe, flexible way.