RISK-BASED CONTROL OF THE NEGATIVE EFFECT OF DISCONTINUED AUTOMATED PROCESSES – A CASE FROM THE AGRICULTURAL DOMAIN
The emergence of ICT technologies in the modern era has facilitated and improved the execution of critical business processes. Agriculture is a domain where automation is present in various important forms, enabling farmers to increase productivity and to comply with environmental policies. However, the “industries are faced with numerous types of natural and man-made threats and disruptions” (Maboudian & Rezaie, 2017). As a consequence, effective policies against such threats should be developed.
Keywords: risk management, business continuity, agriculture, environmental hazards, availability
The current paper delineates a modern algorithmic procedure for estimating the risk and calculating a realistic duration of interrupted critical computerized business activities, in order to mitigate or prevent their corresponding negative consequences. The contribution is formulated by merging risk management and business continuity concepts. The formulation of an integrated business continuity management policy includes the proactive determination of approximate recovery timeframes for critical business functions. In practice, this estimation is based on recovery tests executed under ideal conditions, so unexpected factors which may emerge during a real process interruption and significantly delay its recovery are ignored. Agriculture is a domain where the incorporation of an integrated business continuity management system is a crucial issue. The interruption of agricultural computerized activities can be triggered by, and can result in, various undesirable environmental phenomena. Thus, especially for agriculture, the consideration of unexpected factors when executing recovery tests is highly desirable. The presented algorithm accepts as initial input the estimated recovery time, which is based on recovery exercises executed under ideal conditions. Then, a defined number of potential unpredictable hazards (factors) are taken into consideration and the risk magnitude of each threat is semi-quantitatively estimated. The total risk magnitude is used to estimate the time deviation from the initially defined recovery time, expressed as an absolute value. Once the risk analysis process is complete, a new recovery timeframe is proposed. The algorithm is finally validated by applying the calculated extended timeframe to the system availability formula, which measures the achieved availability level of an information system.
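The procedure described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual method: the 1–5 likelihood/impact scales, the linear scaling of the time deviation by the normalized total risk magnitude, and all function names are assumptions introduced here for clarity; the paper defines its own semi-quantitative scheme.

```python
def risk_magnitude(likelihood: int, impact: int) -> int:
    """Semi-quantitative risk magnitude of one hazard (assumed 1-5 scales)."""
    return likelihood * impact


def adjusted_recovery_time(ideal_rt_hours: float,
                           hazards: list[tuple[int, int]],
                           max_per_hazard: int = 25) -> float:
    """Extend the ideal-conditions recovery time by an absolute time
    deviation proportional to the normalized total risk magnitude
    (an assumed linear model, for illustration only)."""
    total = sum(risk_magnitude(l, i) for l, i in hazards)
    max_total = max_per_hazard * len(hazards) if hazards else 1
    deviation = ideal_rt_hours * (total / max_total)  # absolute value, hours
    return ideal_rt_hours + deviation


def availability(uptime_hours: float, downtime_hours: float) -> float:
    """Standard availability ratio used to validate the extended timeframe."""
    return uptime_hours / (uptime_hours + downtime_hours)


# Example: 8 h ideal recovery time and three hypothetical hazards,
# each given as (likelihood, impact) on the assumed 1-5 scales.
new_rt = adjusted_recovery_time(8.0, [(3, 4), (2, 5), (1, 2)])
# Validate against a one-year window: availability(8760 - new_rt, new_rt)
```

In this sketch the extended timeframe is fed back into the availability formula as downtime over an observation window, mirroring the validation step described in the abstract; the specific window and hazard values are hypothetical.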
The validation of the approach is demonstrated via a practical case study from the agricultural domain, namely the greenhouse irrigation scheduling system interruption scenario.