Reinforcement Learning Approaches for Autonomous Guidance and Control in a Low-Thrust, Multi-body Dynamical Environment

Nicholas Blaine LaFarge, Purdue University

Abstract

Many far-reaching objectives in space exploration are predicated on achieving a high degree of autonomous functionality. Traditional Earth-based mission operations are prohibitively expensive and may hinder deep space mission objectives that demand swift decision-making despite extended communication delays. Overcoming this challenge hinges on the development of onboard algorithms that maintain low computational costs without compromising the precision necessary to ensure mission success.

In developing autonomous functionality for safety-critical environments, recent progress has demonstrated the considerable potential of artificial neural networks as key elements in automation across a wide range of domains and disciplines. For astrodynamics applications, evaluating the efficacy of employing a neural network onboard first requires identifying areas of applicability, recognizing that these promising computational models are not suitable for every operation. Once the application scope is determined, two interconnected objectives follow. The first is selecting and establishing an appropriate training method for constructing the neural network. After training, the most effective strategy for utilizing the network must be devised and examined through a systems-oriented lens.

The overarching goal of this investigation is to address these objectives by demonstrating the versatility and promise of reinforcement learning as a machine learning framework that enables automated decision-making for challenging tasks in complex dynamical regions of space. Moreover, this research seeks to establish a class of composite algorithms that employ neural networks to augment and improve conventional methods, ultimately yielding an efficient and resilient strategy for autonomous spaceflight.

A notable challenge in automating space missions is accommodating, onboard, the off-nominal occurrences that are typically addressed ad hoc by a team of specialists. In particular, determining maneuver plans in real time for low-thrust mission applications in cislunar space remains difficult, especially when confronted with unanticipated events. Many current low-thrust guidance and control approaches rely on either simplifying assumptions in the dynamical model or abundant computational resources. However, future missions to complex multi-body regions of space, such as the Earth-Moon neighborhood, require autonomous technologies that exploit the nonlinear dynamics to generate low-thrust control profiles without imposing an onerous workload on the flight computer, while still ensuring that fundamental mission requirements are satisfied. This investigation addresses the challenge by leveraging neural networks trained via reinforcement learning to enhance onboard maneuver planning capability. The proposed reinforcement learning techniques function without explicit knowledge of the dynamical model, creating flexible learning schemes that are not limited to a single force model, mission scenario, or spacecraft.
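As an illustration of this model-free property, the sketch below trains a simple maneuver policy with policy-gradient (REINFORCE-style) updates against a black-box environment. The toy double-integrator dynamics, the linear-Gaussian policy, and all parameter values are illustrative assumptions chosen for exposition; they are not the environment, network architecture, or training algorithm employed in the dissertation.

    # Minimal sketch: model-free policy-gradient training of a maneuver policy.
    # The dynamics, policy form, and hyperparameters are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    def step(state, action, dt=0.1):
        # Black-box dynamics: the agent never sees this model, only (state, reward).
        pos, vel = state[:2], state[2:]
        vel = vel + dt * action                      # toy thrust acceleration
        pos = pos + dt * vel
        next_state = np.concatenate([pos, vel])
        # Penalize distance from the reference and control effort.
        reward = -np.linalg.norm(pos) - 0.1 * np.linalg.norm(action)
        return next_state, reward

    # Linear-Gaussian policy: mean thrust = W @ state, exploration noise sigma.
    W = np.zeros((2, 4))
    sigma, alpha = 0.2, 1e-3

    for episode in range(2000):
        state = rng.normal(0.0, 1.0, 4)
        states, actions, rewards = [], [], []
        for _ in range(50):
            action = W @ state + sigma * rng.normal(size=2)
            states.append(state)
            actions.append(action)
            state, reward = step(state, action)
            rewards.append(reward)
        # Return-to-go from each step (undiscounted for brevity).
        G = np.flip(np.cumsum(np.flip(np.array(rewards))))
        # REINFORCE: grad log pi(a|s) for this Gaussian policy is ((a - W s)/sigma^2) s^T.
        grad = np.zeros_like(W)
        for s, a, g in zip(states, actions, G):
            grad += g * np.outer((a - W @ s) / sigma**2, s)
        W += alpha * grad / len(states)

Because the update relies only on sampled states, actions, and rewards, the same loop applies unchanged if the step function wraps a higher-fidelity multi-body force model.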
Moreover, this investigation develops a hybrid framework for low-thrust maneuver planning that capitalizes on the computational efficiency of neural networks and the robustness of targeting methods, thereby establishing a novel blended approach to autonomously construct and update maneuver schedules that satisfy mission requirements.
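The blended concept admits a minimal sketch: a stand-in "network" supplies a fast initial maneuver guess, and a differential corrector refines that guess via Newton iteration until a targeting constraint is met. The toy propagator, the linear guess standing in for a trained network, and the tolerances below are hypothetical placeholders, not the dissertation's formulation.

    # Minimal sketch of the hybrid idea: fast approximate guess (network),
    # then a differential corrector (targeter) for precision and robustness.
    import numpy as np

    def propagate(v0, tf=1.0, steps=100):
        # Toy propagator: a linear oscillator r'' = -r stands in for the
        # true multi-body force model. Returns final position for initial
        # velocity v0 from the origin.
        dt = tf / steps
        r = np.zeros(2)
        v = np.array(v0, dtype=float)
        for _ in range(steps):
            v = v + dt * (-r)
            r = r + dt * v
        return r

    def network_guess(r_target):
        # Stand-in for a trained policy network: a crude linear guess.
        return 1.2 * np.asarray(r_target, dtype=float)

    def correct(v0, r_target, tol=1e-10, max_iter=20):
        # Differential corrector: Newton iteration on
        # F(v) = propagate(v) - r_target, with a finite-difference Jacobian.
        v = np.array(v0, dtype=float)
        for _ in range(max_iter):
            F = propagate(v) - r_target
            if np.linalg.norm(F) < tol:
                break
            eps = 1e-6
            J = np.column_stack([
                (propagate(v + eps * np.eye(2)[i]) - F - r_target) / eps
                for i in range(2)
            ])
            v = v - np.linalg.solve(J, F)
        return v

    r_target = np.array([0.5, 0.3])
    v_guess = network_guess(r_target)   # fast, approximate (network)
    v_sol = correct(v_guess, r_target)  # precise, robust (targeter)
    print(np.linalg.norm(propagate(v_sol) - r_target))  # ~0 if converged

In this pairing, the network's warm start reduces the iterations the corrector requires, while the corrector guarantees that the final maneuver satisfies the constraint to tight tolerance.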

Degree

Ph.D.

Advisors

Kathleen C. Howell, Purdue University.

Subject Area

Artificial intelligence|Astronomy
