
Autonomous multi-agent reconfigurable control systems

Subjects
  • Qa75 Electronic Computers. Computer Science
  • Computer Science
  • Design
  • Medicine


This thesis investigates methods and architectures for autonomous multi-agent reconfigurable controllers. The analysis examines two components: the fault detection and diagnosis (FDD) component and the controller reconfiguration (CR) component. The FDD component detects and diagnoses faults; the CR component adapts or changes the control architecture to accommodate the fault. The problem is to synchronize, or integrate, these two components within the overall structure of a control system.

A novel approach is proposed: a multiagent architecture is used to interface between the two components, allowing the system to be viewed as a modular structure. Three types of agent are defined: a planner agent Ap, a monitor agent Am, and a control agent Ac. The monitor agent takes the role of the FDD component, while the planner and control agents together take the role of the CR component.

The planner agent decides which controller to use and passes it on to Ac. It also decides on the parameter settings of the system and changes them accordingly. Ap belongs to the reactive agent category: its internal architecture maps sensor data directly to actions using pre-set, rule-based conditional logic. This architecture was chosen to reduce the overall complexity of the system.

The monitor agent Am belongs to the learning agent category. It uses an adaptive resonance theory neural network (ART-NN) to autonomously categorize system faults, and then informs the other agents of the fault status. ART-NN was chosen because it does not need to be trained on sample data and learns to categorize data patterns on the fly, which allows Am to detect unmodelled system faults.

The control agent Ac also belongs to the learning agent category. It uses a multiagent reinforcement learning algorithm to learn a controller for the system at hand.
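The monitor agent's on-the-fly fault categorization can be illustrated with a minimal ART-1-style sketch for binary fault signatures. This is not the thesis's implementation; the class name, the vigilance value, and the binary-pattern encoding are assumptions made for illustration.

```python
import numpy as np

class ART1:
    """Minimal ART-1-style categorizer for binary patterns (illustrative
    simplification, not the thesis's ART-NN implementation)."""

    def __init__(self, vigilance=0.8):
        self.rho = vigilance      # vigilance parameter: match threshold
        self.templates = []       # one binary template per learned category

    def categorize(self, pattern):
        pattern = np.asarray(pattern, dtype=bool)
        # Search existing categories, largest overlap first
        order = sorted(range(len(self.templates)),
                       key=lambda j: -np.logical_and(pattern, self.templates[j]).sum())
        for j in order:
            overlap = np.logical_and(pattern, self.templates[j])
            # Vigilance test: does this template explain enough of the input?
            if overlap.sum() / max(pattern.sum(), 1) >= self.rho:
                self.templates[j] = overlap  # resonance: refine the template
                return j
        # No existing category matched closely enough: a novel (possibly
        # unmodelled) fault signature gets its own new category
        self.templates.append(pattern.copy())
        return len(self.templates) - 1
```

The key property the abstract relies on is visible here: when no stored template passes the vigilance test, a new category is created rather than forcing the input into a known class, which is what lets Am flag faults it was never trained on.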
Once a suitable controller has been learnt, its parameters are passed to Ap to be stored in the planner's memory, and learning is terminated. During control execution mode, controller parameters are sent from Ap to Ac. The novel approach is demonstrated on a case study: a laboratory-built four-wheeled skid-steering vehicle, complete with sensors, designed for demonstration purposes. Several faults are simulated and the response of the demonstration system is analyzed.
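The store-then-dispatch handoff between Ap and Ac can be sketched as follows. The class names, the dictionary-based messages, and the gain parameter are hypothetical; the sketch only shows the reactive rule described in the abstract, mapping a fault report directly to an action.

```python
class PlannerAgent:
    """Sketch of the planner agent Ap: stores learned controller parameters
    keyed by fault category and dispatches them to the control agent Ac.
    All names and message formats are illustrative assumptions."""

    def __init__(self):
        self.memory = {}  # fault category -> stored controller parameters

    def store(self, fault_id, params):
        # Called once Ac has finished learning a controller for this fault
        self.memory[fault_id] = params

    def dispatch(self, fault_id):
        # Reactive rule: map the monitor's fault report directly to an action
        if fault_id in self.memory:
            return {"action": "load_controller", "params": self.memory[fault_id]}
        return {"action": "start_learning", "fault": fault_id}


class ControlAgent:
    """Sketch of the control agent Ac in execution mode: it simply loads
    whichever controller parameters the planner sends."""

    def __init__(self):
        self.params = None

    def receive(self, message):
        if message["action"] == "load_controller":
            self.params = message["params"]
        return message["action"]
```

A usage example: after `store("motor_fault", {"kp": 1.2})`, a report of `"motor_fault"` yields a `load_controller` message, while an unseen fault triggers `start_learning`, matching the learn-once, execute-thereafter flow described above.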
