At a briefing for the APEurope Correspondent's Pool, the George Boole Foundation provided details on how Artificial Intelligence will become an increasingly important component of the SDGToolkit's capabilities and how training will raise the competence of trainees to contribute to advancing the state of the art in due diligence and analytical tools.
Attendees welcomed the demystification of the topic of AI and the very practical approach to be adopted by SDF training projects.
What is AI?
Too much discussion, seminar content and even workshop exchange on AI fails to define what it is; therefore, for many, it remains something of a mystery. In basic terms, AI is the automation of information handling by digital devices such as computers. The "artificial intelligence" dimension simply registers the fact that AI emulates the natural way humans rationalize, deduce and take decisions.
The contribution of George Boole
The fundamental mathematical logic of digital devices was explained by George Boole, after a long and dedicated study of how humans deduce, long before digital devices existed. In 1854 Boole published a book entitled "The Laws of Thought on which are founded The Mathematical Theories of Logic and Probabilities". It used binary logic to explain how humans reach conclusions involving logic and uncertainty. Eighty-three years later, Claude Shannon wrote a paper based on his 1937 master's thesis, "A Symbolic Analysis of Relay and Switching Circuits", in which he explained how Boolean logic could contribute to a more efficient design of electrical switching circuits. This explanation launched Boolean logic into the world of integrated circuit design and digital programming. Boolean logic is still used to design the logic of integrated circuit layouts, and Boolean reduction is still used to optimize the final integrated circuit logic design. Today, Boolean logic underpins more than 99% of computers, communications networks and the World Wide Web.
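As a simple illustration of what Boolean reduction means in practice (the expression below is a generic textbook example, not one drawn from Boole's text or from any particular circuit): the expression (A AND B) OR (A AND NOT B) factors to A AND (B OR NOT B), and since (B OR NOT B) is always true, the whole expression reduces simply to A. In circuit terms, several gates collapse to a single signal, which is exactly the kind of simplification applied when optimizing integrated circuit logic.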
Therefore, AI is the emulation of human logical procedures and is applied, in the main, to support decision-making by exploring the possible outcomes associated with different premises. AI can also work towards defined objectives, such as the optimization of a decision according to decision preferences.
An emphasis on improving the practical capabilities of teams
Applying the process approach, the training systems within SDF projects involve repetitive reassessment of design and operational decisions. To lower risk, many such decisions are simulated using analytical tools (ATs). ATs are determinant models which relate the main input determinants, such as water, nutrients or temperature, to a desired output such as the yield of a crop. At any particular time, analytical tools are based on the current understanding of cause-and-effect relationships. With time, as knowledge advances, analytical tools need to be adjusted to take into account additional determinants or modified formulae, resulting in the need to alter AT algorithms.
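A minimal sketch of what such a determinant model can look like is given below, assuming a simple multiplicative response function; the variable names, optima and coefficients are illustrative assumptions and are not taken from the SDGToolkit's analytical tools.

```python
# Illustrative determinant model: relates a few input determinants to crop yield.
# The response function and all constants are assumptions for this sketch only.

def yield_estimate(water_mm: float, nitrogen_kg_ha: float, temp_c: float) -> float:
    """Estimate crop yield (t/ha) from three input determinants."""
    water_factor = min(water_mm / 600.0, 1.0)            # 600 mm assumed optimum
    nutrient_factor = min(nitrogen_kg_ha / 120.0, 1.0)   # 120 kg/ha assumed optimum
    temp_factor = max(0.0, 1.0 - abs(temp_c - 22.0) / 15.0)  # 22 C assumed optimum
    potential_yield_t_ha = 8.0                            # assumed potential yield
    return potential_yield_t_ha * water_factor * nutrient_factor * temp_factor

# The algorithm is re-run, or its constants revised, as knowledge of the
# cause-and-effect relationships advances.
print(round(yield_estimate(water_mm=450, nitrogen_kg_ha=100, temp_c=24), 2))
```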
AI?
Artificial intelligence is the use of digital logic to emulate how humans rationalize and come to conclusions (deduction). In the early and late 1980s there were significant initiatives into so-called 5th Generation computing, or AI, following Japan's declaration that it would become the knowledge engineering centre of the world, made in the MITI ICOT Report published in 1982. Although there were Sputnik-like reactions in the USA, the UK and at the European Commission, the actual work on AI did not advance significantly and was overtaken by the normal evolution and progress of computer technology.
Decision analysis
In the 1960s Ronald Howard and co-workers in the Decision Analysis Group at Stanford Research Institute developed the discipline of Decision Analysis based on determinant models. This made use of a wide range of operations research techniques to simulate the outcomes of decision options in order to identify preferable solutions to defined problems. Use was also made of cybernetic models with feedback loops, or memory, to emulate learning systems.
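As a rough illustration of the idea of simulating the outcomes of decision options, the sketch below weighs a few hypothetical options across uncertain scenarios and ranks them by expected value; the options, scenarios and value function are invented for the example and do not come from the SRI work.

```python
# Minimal sketch of decision analysis: simulate the outcome of each decision
# option under uncertain conditions and rank the options by expected value.
# The options, scenarios and value function are illustrative assumptions.

def outcome_value(water_mm: float, nitrogen_kg_ha: float, temp_c: float) -> float:
    """Hypothetical relative value of a season under the given conditions."""
    return (min(water_mm / 600.0, 1.0)
            * min(nitrogen_kg_ha / 120.0, 1.0)
            * max(0.0, 1.0 - abs(temp_c - 22.0) / 15.0))

options = {
    "irrigate_now":   {"water_mm": 550, "nitrogen_kg_ha": 100},
    "delay_one_week": {"water_mm": 480, "nitrogen_kg_ha": 100},
    "add_fertilizer": {"water_mm": 480, "nitrogen_kg_ha": 130},
}

# Probability-weighted temperature scenarios expressing uncertainty
scenarios = [(0.3, 20.0), (0.5, 23.0), (0.2, 27.0)]

def expected_value(inputs: dict) -> float:
    return sum(p * outcome_value(temp_c=t, **inputs) for p, t in scenarios)

best = max(options, key=lambda name: expected_value(options[name]))
print(best, round(expected_value(options[best]), 3))
```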
In 1983 SEEL (Systems Engineering Economics Lab) was set up by Hector McNeill to monitor digital applications development linked to global networks. In 1985 SEEL purchased the bulk of the research output of the SRI Decision Analysis Group. The purpose was to use decision analysis as a basis for developing analytical tools, in the form of decision analysis models, with which to build AI capabilities.
During the period 1984 through 1987 Hector McNeill was a temporary agent at the Information Technology & Telecommunications Task Force (ITTTF) in Brussels, heading a learning systems initiative. McNeill's interpretation of a learning system was one that serves a broader constitutional context by helping government, business and individuals advance their state of knowledge and their ability to take rational decisions in the democratic arena, in business decision making and in personal advancement.
McNeill's objective was to take advantage of the empirical evidence that around 80% of real economic growth is the result of:
- learning and the accumulation of applied experience raising tacit knowledge (competence)
- the generation of explicit knowledge (data and information) with which to analyse and communicate
- the identification of ways to improve existing procedures and processes
- innovation resulting from the design and implementation of new procedures
- the promotion of sustainable growth in real incomes
The collection of data on advancing knowledge of cause and effect, together with tacit knowledge providing practical insights into the current boundaries of feasible action, constitutes the information required for the design of deterministic decision analysis models for AI. For this to operate effectively there needed to be a way to preserve state-of-the-art capabilities (feasibilities) in a precise and quantitative fashion, as well as procedures and methods to identify and specify the data sets required, upon which algorithms would operate to support decisions identifying feasible and beneficial change.
In 1985, McNeill identified two components which he considered essential for the successful development and practical operation of such systems:
- The Accumulog (equivalent to a blockchain) as an immutable, cumulative recording and recall database (a generic sketch of the idea follows this list)
- Locational-state analysis as a basis for ensuring the integrity of information used in decision analysis, avoiding wrong information being sent over global networks to data users, and for ensuring the veracity of information used in decision making as a primary-level requirement for ethical and precise decision making
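The source describes the Accumulog only at this conceptual level, as an immutable cumulative record; the hash-chained, append-only log below is a generic sketch of that idea, not the Accumulog implementation itself.

```python
# Generic sketch of an immutable, cumulative record log: each entry carries the
# hash of its predecessor, so earlier records cannot be altered undetected.
import hashlib, json, time

class AppendOnlyLog:
    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"record": record, "timestamp": time.time(), "prev": prev_hash}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any tampering with earlier records breaks it."""
        prev = "0" * 64
        for e in self.entries:
            body = {"record": e["record"], "timestamp": e["timestamp"], "prev": prev}
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

log = AppendOnlyLog()
log.append({"indicator": "crop_yield", "value": 4.2})
print(log.verify())  # True while the log is untampered
```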
Locational-state analysis has since been developed by McNeill, working at SEEL, into Locational-State Theory (LST), which has become an essential ingredient of AI systems dealing with renewable natural resources knowledge acquisition and the management of sustainable natural resources and agricultural systems under climate change. One of its contributions is to help define survey and data collection typologies as a function of data collection objectives, answering the questions of what is being decided with the data and how precise the data needs to be to achieve reliable decisions (LST has a dedicated website: Locational-State Theory).
As can be appreciated, this has ethical implications because the correct application of LST can increase the proportion of explained variance in the data collected and, as a result, produce far more precise estimates, improving the utility of decision-making in a world facing an existential crisis associated with climate change. To base decisions on less complete data creates an unethical state of affairs in which unreliable conclusions or advice result from analyses, raising risk.
Accumulog technology has been built into the recently released SDGToolkit.
Data Reference Models
The development of decision analysis or AI models involves a systems approach (multidisciplinary) combined with the agricultural extension systems approach, which aims to introduce innovation through demonstrations and adaptation to farm conditions. After a prolonged period working in this field, McNeill decided to solve a common problem facing systems groups needing to share key information on determinant relationships and the results of their interactions with stakeholders, policy makers and information technology experts. Each group uses a different terminology to describe the same relationships, causing difficulties in communication, and IT modelling languages were too IT-specific and difficult for non-IT personnel to understand. In 2014, McNeill gave a presentation at one of the Decision Analysis Initiative 2010-2020 workshops where he proposed the Data Reference Model (DRM)1 as a way of improving systems group communications by mapping out and communicating requirements and system functionalities in a more intelligible and inclusive way, engaging all members of mixed-competency groups.
Figure 1: Data Reference Model structure
In explaining "Why the DRM?" McNeill stated that,
" Too many practitioners accept what is in applications but do not always fully understand what the algorithm is doing; it is a black box. This is an unethical state of affairs for any professional area. Those applying such applications must understand why and how the algorithms work otherwise they can have no confidence in the results generated". |
A DRM is a simple, structured communications medium for mixed-competency groups who share a common objective of agreeing on the specifications and operational structure of an information system. The most common application is in support of standardized target data, such as coefficients or indicators applied to establish comparative measures of performance. Each measure or indicator has a separate DRM, because the data requirements, or the calculation methods, usually differ between indicators.
A DRM (see Figure 1) contains descriptions of all processes involved in a system's operation, providing a transparent reference for all involved, including administrators, application domain experts, technical personnel, statisticians, survey designers, information technology specialists and stakeholders. This level of transparency is intended to allow stakeholders to assess and give feedback on any detail within a DRM, so as to end up with a commonly agreed and understood system.
The DRM is a tabular, ladder-like description of an information process, running from where and how specified data is collected, through processing, to the target data in its final form, such as an indicator. The DRM combines the methodologies applied to the collected data to calculate indicator values, in the form of formulae, equations and algorithms consisting of mathematical logic, algebraic symbols and operators, with narratives and descriptions for non-technical stakeholders.
Building a DRM
The process of filling in the DRM in the case of an indicator is to "walk through" the description of the indicator by following the grey arrows from top to bottom in the description column and, in sequence, describing the following (a sketch of this walk-through appears after the list):
- What is being determined or measured, or what is the indicator?
- How is it calculated?
- What data is used to calculate it?
- How does the data move from its collection point to where it is recorded and calculated?
- What is the method of data collection?
- Where is the data and when does the collection need to take place?
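A minimal sketch of how the answers to these six questions might be captured for one hypothetical indicator is shown below; the indicator and the wording are illustrative and are not drawn from an actual SDGToolkit DRM.

```python
# Illustrative description-column "walk through" for one hypothetical indicator.
# The indicator, wording and data sources are assumptions for this sketch only.

drm_description_column = {
    "indicator":         "Average maize yield per hectare for the project area",
    "calculation":       "Total harvested grain (t) divided by total harvested area (ha)",
    "data_used":         ["harvested grain weight per plot", "harvested area per plot"],
    "data_movement":     "Recorded on field sheets, entered into the project database weekly",
    "collection_method": "Crop-cut sample survey of participating farms",
    "where_and_when":    "Participating farms, at harvest (one visit per plot)",
}

# Walking the ladder from top to bottom reproduces the six questions in order
for step, description in drm_description_column.items():
    print(f"{step}: {description}")
```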
Once these descriptions are agreed, the DRM is completed by another "walk through", following the blue arrows from bottom to top, converting the descriptions in the description column into data specifications in the specification column:
- From where and when is data collected?
- Is this by full population or sample survey or other?
- What are assumed data storage and transmission system and protocols?
- What is the data element specification in terms of:
- property (identity/name)
- metrics (unit of measure)
- type (numeric, text, logical)
- precision (the length of the data and the scale, or number of digits after the decimal point). Although precision usually applies to decimals, in DRMs it is also applied to the minimum length of text. The format used is of the form 5/2, which in the case of a decimal signifies a total of 5 digits, of which 2 follow the decimal point; in the case of text, the format 40/0 indicates a minimum length of 40 characters (see the sketch after this list)
- What is the method of calculation (algorithm, formula, equation) using the data elements to estimate the indicator value?
- What are the units or measures (dimensional expression) of the indicator?
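The sketch below illustrates one way a data element specification and the precision format described above could be represented and checked; the field names and the example element are assumptions made for the illustration, not an SDGToolkit schema.

```python
# Illustrative data element specification with a check of the "length/scale"
# precision format (e.g. "5/2" for decimals, "40/0" for text).
from decimal import Decimal

def parse_precision(fmt: str) -> tuple:
    """Split a 'length/scale' precision format into its two integer parts."""
    length, scale = fmt.split("/")
    return int(length), int(scale)

def fits_numeric(value: Decimal, fmt: str) -> bool:
    """True if the value fits the total digits and decimal places of fmt."""
    length, scale = parse_precision(fmt)
    sign, digits, exponent = value.as_tuple()
    decimals = max(0, -exponent)
    return len(digits) <= length and decimals <= scale

element = {
    "property": "harvested_grain_weight",  # identity/name
    "metrics": "tonnes",                   # unit of measure
    "type": "numeric",
    "precision": "5/2",                    # up to 5 digits, 2 after the point
}

print(fits_numeric(Decimal("123.45"), element["precision"]))    # True
print(fits_numeric(Decimal("1234.567"), element["precision"]))  # False: too many digits
```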
Because the content of the specification column is sometimes technical or mathematical, the narrative column is completed with a simple description of that content. To ensure that the narrative is understood by all, and in particular by non-technical stakeholders, it is best to complete it as a group, so that there is agreement on its meaning and therefore on the whole DRM.
Precision
The reference to precision in relation to data elements does not refer to statistical precision or accuracy but rather to the IT data definition: the dimension (length) of numbers and, for decimals, the number of decimal places recorded. This is important in defining the level of detail of the information to be collected, so that the rounding that occurs in calculation does not introduce errors of representation. Precision also ensures that what enters the information management system is coherent with the design of the database that stores the primary data, where precision is a required specification. Primary data sets of data elements require minimum standards of precision to reduce errors in the indicator value estimates.
Figure 2: SDF training approach
Although all recommendations and procedures are based on explicit knowledge, the OQSI training guidelines emphasize the development of trainees' cumulative tacit knowledge through repetitive cycles of action, observation, adjustment and improvement of tasks. This approach is based on SEEL's instructional simulation approach and the ISO process approach.
However, quite often field observations point to a need to alter the algorithms applied, and this is completed through team and stakeholder contributions to the Data Reference Model (DRM).
This approach results in a constant increase in quality standards and professional competence as a result of the learning curve.

Source: ISO 9001; OQSI 2020; SEEL IS:2020; OQSI 2021; SEEL 2021.
It is assumed that the primary data collected in a survey, together with any additional data required and available from other sources, make up the primary data of each DRM.
Systems engineering, training in decision analysis and AI
DRMs came about because some of the solutions for specifying systems proposed by information technologists have often been difficult for non-IT specialists to comprehend. On the other hand, IT specialists need the input of domain specialists to identify the required datasets and methods of data collection in order to configure systems in the right way. Since it is often non-IT specialists who take decisions concerning the commitment of resources to the implementation of systems, there is a need for a better communications medium that enables all aspects of a system's design to be presented within one document. The interfaces between each section need to be transparent enough to show how a need at one level is handled by a solution at the next level down. The DRM therefore provides a means of presenting a cascade of needs and solutions at all levels of a system's operation.
Applications
Although accepted by the OQSI as a recommended technique, DRMs have not found widespread application. However, the George Boole Foundation and SEEL have applied them in hundreds of initiatives, including agricultural policy projects funded by the European Commission in Central Europe and, recently, significant international non-profit initiatives funded by standards and certification groups in the private sector linked to the Sustainable Development Goals.
In terms of the operational units within the George Boole Foundation, SEEL uses DRMs to complete stakeholder assessment of analytical tools destined for the SDGToolkit. Of the more than 200 identified, some 65 have been completed. The time delays relate to securing systems groups with the relevant membership experience for each AT to contribute to the DRM exercise. The level of focus the DRM provides means the team sessions can be relatively short and effective.
The role of DRMs in training
Because of their powerful communications role, DRMs are considered to be excellent devices for the training of project teams.
The OQSI made DRMs a recommended procedural method in 2015.
In the article "The SDF's practical approach to training", the SDF's adopted approach to training was described as being based on the process approach, a quality improvement technique recommended by ISO, BSI and OQSI. The process approach is essentially made up of the steps shown within the large grey circle in Figure 2, as the repetitive sequence Plan --> Do --> Check --> Act. However, on each pass the Act function is transformed into a DRM exercise involving the team, who review outcomes to consider ways to improve the system and the algorithm, checking its relevance and its ability to emulate what is observed in practice, and take any needed actions to improve the AI component. A particularly interesting aspect of this approach is that trainees find the process easy to understand. The comparison of alternative AI algorithmic configurations provides a basis for instructional simulation, an approach promoted by SEEL for on-the-job training. Note the feedback into the guidance or instruction briefs: the involvement of trainees in the DRM process helps to clarify, for them, why guidance needs to be adjusted and, as a result, creates no difficulties.
The advantages of the process approach
The benefits of the process approach include the integration and alignment of all processes towards the achievement of objectives, with all efforts focused on process effectiveness and efficiency. There is an improvement in the confidence of donors and management in the consistent and rising performance of teams. Overall, there is a transparency of operations that leads to learning to lower costs, reduce delays and use resources more effectively, resulting in improved, consistent and predictable results. An essential outcome is the constant identification of ways to improve overall performance, leading to practical innovation. This open system encourages full team involvement and well-defined responsibilities, leading to a high degree of professional satisfaction.
What has all of this got to do with agricultural innovation?
As in the case of LST, DRMs enable mixed groups to embrace and manage the complexity of agricultural production systems. The process approach and the DRM device are in reality another version of the agricultural extension system approach, where innovation occurs as a result of the accumulation of knowledge and the observation and measurement of what is feasible in practice. Best practice is identified, measured and modelled as a basis for giving advice to farmers. At any particular time, the DRM contains a summary of the sum total of current knowledge on agricultural production systems relevant to a particular algorithm or AI application; there is no mystery, and objectives and means are exposed in a transparent fashion. This is the foundation of a learning system linked to agriculture.
This model is a rational basis for taking decisions on production options in a global state of affairs where income disparity is rising, sustainability is falling and temperatures continue to rise. The 2019 Sustainable Development Report pointed out that the current project portfolio was failing to counter rising inequalities, falling sustainability and rising temperatures (SDGs 10, 12 & 13). AI, with the mystique stripped away, can contribute to solving these issues. Because of the existential nature of the challenges facing the human race and the planet, the subject of ethics in decision making is very significant. Taking ethical decisions requires a refined knowledge of cause and effect, together with the discipline of putting together the due diligence design procedures and building the analytical tools that help improve decision making based on the best available facts.
The imperative is to find ways and means to secure human survival, and sustainable agricultural innovation is a central factor in improving our ability to achieve this goal. The SDF training and integrated development environment aims to contribute to raising professional standards through the development of capabilities in decision analysis based on carefully assembled and analysed evidence, so as to bring about improvements in the Agenda 2030 project portfolio performance.
1 McNeill, H.W., "Improving communications within systems groups", Decision Analysis Initiative 2010-2015, Portsmouth, August 2014.
Updated: 1st June 2021.
APEurope.org