Background
Large language models (LLMs) are advanced artificial intelligence (AI) systems trained on large amounts of text data to generate human-like responses for end users. Despite their capabilities, they can produce inaccurate, biased, or harmful content, and these shortcomings can be detrimental if left unchecked. Providing opportunities for human intervention allows an AI's outputs to be evaluated to ensure its responses are safe, accurate, and ethical.
Invention Description
Researchers at Arizona State University have developed CPS-LLM, an LLM that is retrained using an instruction-tuning framework. It ensures that generated plans not only align with the physical system dynamics of the cyber-physical system (CPS) but are also safe for human users. CPS-LLM consists of two main components: 1) a liquid time-constant neural network-based physical dynamics coefficient estimator that can derive the coefficients of dynamical models even when some state variables are unmeasured; and 2) an LLM trained with prompts that embed 20 traces from the dynamical system together with the corresponding model coefficients. When integrated with a contextualized chatbot such as BARD, CPS-LLM can generate feasible and safe plans for managing various external events.
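The sketch below illustrates how these two components could fit together: a liquid time-constant (LTC) style recurrent estimator that regresses dynamical-model coefficients from a partially observed trace, and a helper that packs a trace and its estimated coefficients into an instruction-tuning record. It is a minimal sketch in PyTorch under stated assumptions; the class names, hyperparameters, update rule, and prompt fields are illustrative and not the authors' implementation.

```python
# Illustrative sketch only: LTC-style coefficient estimator + instruction-prompt
# packing, loosely following the CPS-LLM description. Names and hyperparameters
# are assumptions, not the published implementation.
import torch
import torch.nn as nn


class LTCCell(nn.Module):
    """Single liquid time-constant cell (semi-implicit fused update)."""

    def __init__(self, input_dim: int, hidden_dim: int, dt: float = 0.1):
        super().__init__()
        self.in_proj = nn.Linear(input_dim, hidden_dim)
        self.rec_proj = nn.Linear(hidden_dim, hidden_dim, bias=False)
        self.log_tau = nn.Parameter(torch.zeros(hidden_dim))  # per-unit time constant
        self.bias_a = nn.Parameter(torch.zeros(hidden_dim))   # per-unit target state A
        self.dt = dt

    def forward(self, u, x):
        # Input- and state-dependent positive gate f(x, u).
        f = torch.sigmoid(self.in_proj(u) + self.rec_proj(x))
        tau = torch.exp(self.log_tau)
        # Fused step: x' = (x + dt * f * A) / (1 + dt * (1/tau + f))
        return (x + self.dt * f * self.bias_a) / (1.0 + self.dt * (1.0 / tau + f))


class CoefficientEstimator(nn.Module):
    """Maps a trace of (partially observed) CPS states to dynamical-model coefficients."""

    def __init__(self, obs_dim: int, hidden_dim: int, n_coeffs: int):
        super().__init__()
        self.cell = LTCCell(obs_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, n_coeffs)

    def forward(self, trace):                       # trace: (batch, time, obs_dim)
        x = trace.new_zeros(trace.size(0), self.cell.bias_a.numel())
        for t in range(trace.size(1)):
            x = self.cell(trace[:, t, :], x)
        return self.head(x)                         # (batch, n_coeffs)


def build_instruction_example(trace, coeffs, event: str) -> dict:
    """Pack one trace and its estimated coefficients into an instruction-tuning record."""
    return {
        "instruction": f"Given the system trace and model coefficients, plan a safe response to: {event}",
        "input": f"trace={trace.tolist()} coefficients={coeffs.tolist()}",
        "output": "",  # filled with the safe, expert-validated plan when the dataset is built
    }


if __name__ == "__main__":
    estimator = CoefficientEstimator(obs_dim=3, hidden_dim=32, n_coeffs=4)
    trace = torch.randn(1, 50, 3)                   # one 50-step trace of 3 observed variables
    coeffs = estimator(trace)
    example = build_instruction_example(trace[0], coeffs[0].detach(), "unannounced meal")
    print(example["instruction"])
```

In this sketch the estimated coefficients are exposed to the LLM purely through the prompt text, which mirrors the idea of conditioning plan generation on the identified system dynamics rather than modifying the LLM architecture itself.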
Potential Applications:
- Automated insulin delivery systems
- Automated commercial manufacturing
- Autonomous cars & unmanned aerial vehicles
Benefits and Advantages:
- Increased safety
- More accurate recommendations
- Easily enables human intervention and approval for automated processes
Related Publication: CPS-LLM: Large Language Model based Safe Usage Plan Generator for Human-in-the-Loop Human-in-the-Plant Cyber Physical System