You can be confident that your flooding data is up to date, correct, and ready to use.
A holistic metadata framework ensures the provenance of your flooding datasets.
Archiving, versioning, rollback, and quality levels ensure your datasets are tracked, rated, and fit for purpose. Dedicated tools streamline data inflow and outflow.
As your flooding data changes over time, you can drill through those changes at any location, or even roll back to a previous state.
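As a minimal sketch of how such "point in time" history can work, the example below keeps every version of a dataset with a validity interval and answers "what was current at time T?" queries. The SQLite schema and function names are illustrative assumptions, not the product's implementation.

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE dataset_versions (
        dataset_id TEXT NOT NULL,
        version    INTEGER NOT NULL,
        grid_path  TEXT NOT NULL,
        valid_from TEXT NOT NULL,   -- ISO 8601, UTC
        valid_to   TEXT             -- NULL means "current version"
    )
""")

def publish(dataset_id, version, grid_path):
    """Close off the current version and open a new one."""
    now = datetime.now(timezone.utc).isoformat()
    conn.execute(
        "UPDATE dataset_versions SET valid_to = ? "
        "WHERE dataset_id = ? AND valid_to IS NULL",
        (now, dataset_id))
    conn.execute(
        "INSERT INTO dataset_versions VALUES (?, ?, ?, ?, NULL)",
        (dataset_id, version, grid_path, now))

def as_at(dataset_id, timestamp):
    """Return the grid that was current at a given moment (rollback view)."""
    row = conn.execute(
        "SELECT grid_path FROM dataset_versions "
        "WHERE dataset_id = ? AND valid_from <= ? "
        "AND (valid_to IS NULL OR valid_to > ?)",
        (dataset_id, timestamp, timestamp)).fetchone()
    return row[0] if row else None

publish("catchment_42_depth", 1, "grids/catchment_42_v1.tif")
t = datetime.now(timezone.utc).isoformat()   # a moment in history
publish("catchment_42_depth", 2, "grids/catchment_42_v2.tif")
print(as_at("catchment_42_depth", t))        # -> grids/catchment_42_v1.tif
```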
Centralise disparate flooding datasets into a single framework to ensure that all users are accessing the correct datasets.
Dedicated tools help you manage study overlap, mixed detail levels, and inconsistency in your flooding datasets so that your end users do not have to; they can simply rely on the datasets they have access to.
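As an illustration of how overlap between studies might be resolved automatically, the sketch below picks the highest-quality dataset covering a query point. The Study class and the quality scale are assumptions made for the example, not the product's API.

```python
from dataclasses import dataclass

@dataclass
class Study:
    name: str
    overall_quality: int   # e.g. 1 (rapid assessment) .. 5 (formal flood study)
    bounds: tuple          # (xmin, ymin, xmax, ymax)

def covers(study, x, y):
    xmin, ymin, xmax, ymax = study.bounds
    return xmin <= x <= xmax and ymin <= y <= ymax

def best_study_at(studies, x, y):
    """Of all studies covering the point, return the highest-quality one."""
    candidates = [s for s in studies if covers(s, x, y)]
    return max(candidates, key=lambda s: s.overall_quality, default=None)

studies = [
    Study("Regional rapid hazard assessment", 2, (0, 0, 100, 100)),
    Study("Creek formal flood study", 5, (40, 40, 60, 60)),
]
print(best_study_at(studies, 50, 50).name)   # the detailed formal study wins
```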
Mastergrids enable you to provide seamless access to all of your flooding data, at once, and with unrivalled speed.
An extensive metadata framework ensures that your data sources are never lost and that their provenance is maintained.
Commonly used fields include:
- Study details (name, author, date, coverage extents)
- Model Type and Model Detail
- Model Quality: a measure of the quality of the modelling (resolution, model type, etc.)
- Study Quality: a measure of the quality of the type of study (formal flood study, flood impact assessment, rapid hazard assessment, etc.)
- Overall Quality: a combination of Model Quality and Study Quality
- Flooding Type (riverine, overland, and storm tide)
- Design events modelled
- Scenarios modelled (calibration, climate change, blockage)
- Catchment ID
- Locales covered by the study
- Status (draft, adopted, retired, rollback, etc.)
- Information type (peak or time series)
- Reference GIS layers
- Study reports and references
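For illustration, here is one way those fields could map onto a record type. The field names follow the list above, but the structure, the types, and the rule for combining Model Quality and Study Quality into Overall Quality are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    DRAFT = "draft"
    ADOPTED = "adopted"
    RETIRED = "retired"
    ROLLBACK = "rollback"

@dataclass
class StudyMetadata:
    name: str
    author: str
    date: str
    coverage_extents: tuple            # (xmin, ymin, xmax, ymax)
    model_type: str
    model_detail: str
    model_quality: int                 # resolution, model type, etc.
    study_quality: int                 # formal study, impact assessment, etc.
    flooding_type: str                 # riverine, overland, or storm tide
    design_events: list[str] = field(default_factory=list)
    scenarios: list[str] = field(default_factory=list)  # calibration, climate change, blockage
    catchment_id: str = ""
    locales: list[str] = field(default_factory=list)
    status: Status = Status.DRAFT
    information_type: str = "peak"     # peak or time series
    reference_gis_layers: list[str] = field(default_factory=list)
    study_reports: list[str] = field(default_factory=list)

    @property
    def overall_quality(self) -> int:
        # The combination rule (minimum of the two) is an assumption
        # made for this sketch.
        return min(self.model_quality, self.study_quality)
```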
Versioning and publishing let you control which datasets different users in your organisation can access and how they interact with them.
Create tailored projects so that different end-user types can access the datasets they need without being distracted by extraneous information.
Specialised tools allow you to supply datasets to external users. Just draw a polygon and export.
All data export processes are tracked in the database.
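A sketch of what a polygon export with a tracked audit trail could look like, assuming rasterio for the clip and a SQLite table for the export log; the paths, table schema, and recipient details are hypothetical.

```python
import sqlite3
from datetime import datetime, timezone

import rasterio
from rasterio.mask import mask

db = sqlite3.connect("exports.db")
db.execute("CREATE TABLE IF NOT EXISTS exports ("
           "grid_path TEXT, out_path TEXT, recipient TEXT, exported_at TEXT)")

def export_polygon(grid_path, polygon_geojson, out_path, recipient):
    """Clip a flood grid to a polygon and record the export in the database."""
    with rasterio.open(grid_path) as src:
        clipped, transform = mask(src, [polygon_geojson], crop=True)
        profile = src.profile.copy()
        profile.update(height=clipped.shape[1], width=clipped.shape[2],
                       transform=transform)
    with rasterio.open(out_path, "w", **profile) as dst:
        dst.write(clipped)
    # Track the export so every outgoing dataset is auditable.
    db.execute("INSERT INTO exports VALUES (?, ?, ?, ?)",
               (grid_path, out_path, recipient,
                datetime.now(timezone.utc).isoformat()))
    db.commit()

# Polygon coordinates are assumed to be in the grid's CRS.
square = {"type": "Polygon",
          "coordinates": [[(0, 0), (0, 500), (500, 500), (500, 0), (0, 0)]]}
export_polygon("grids/flood_depth.tif", square,
               "out/parcel_clip.tif", "External Consultant Pty Ltd")
```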
Equally, when new or updated sub-datasets are returned (e.g. a development assessment), you can seamlessly merge them into your existing datasets.
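For example, a returned patch could be merged over the existing grid so that the newer data takes precedence wherever the two overlap. This sketch uses rasterio's merge for illustration; the file names are placeholders.

```python
import rasterio
from rasterio.merge import merge

with rasterio.open("development_assessment.tif") as patch, \
     rasterio.open("existing_flood_grid.tif") as base:
    # merge() keeps the first valid value it finds, so listing the
    # patch first means it wins wherever the two grids overlap.
    mosaic, transform = merge([patch, base])
    profile = base.profile.copy()
    profile.update(height=mosaic.shape[1], width=mosaic.shape[2],
                   transform=transform)

with rasterio.open("merged_flood_grid.tif", "w", **profile) as dst:
    dst.write(mosaic)
```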
A single point of truth: consistent data receipt, tracking, and exporting.
Versioning, "point in time" data history (legal challenges)
Centrally manage study overlap, quality, and consistency, as well as data updates and data replacement.
Maintain a full, searchable data history.
End users just "open and use". No need to locate specific layers.
Advanced data streaming routines keep access fast, no matter how large your datasets grow.
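The kind of windowed access that makes this possible can be sketched with rasterio: only the tiles intersecting the area of interest are read from disk, so response time does not depend on total grid size. The file name and window are placeholders.

```python
import rasterio
from rasterio.windows import Window

with rasterio.open("mastergrid_flood_depth.tif") as src:
    # Read one small window instead of the whole (possibly enormous) raster.
    aoi = Window(col_off=2048, row_off=2048, width=256, height=256)
    tile = src.read(1, window=aoi)
    print(tile.shape, tile.mean())
```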
Download a technical paper presented by Cameron Druery at the Floodplain Management Australia Conference in Canberra, 2019:
The need to manage the looming "big flood data" problem.