Title: Inexact Newton Method for Nonlinear Constrained Optimization

Inexact Newton methods play a fundamental role in the solution of large-scale unconstrained optimization problems and nonlinear equations. The key advantage of these approaches is that they can be made to emulate the properties of Newton's method while allowing flexibility in the computational cost per iteration. Due to the multi-objective nature of *constrained* optimization problems, however, which require an algorithm to find a point that is both feasible and optimal, it has not been known how to apply an inexact Newton method successfully within a globally convergent framework. In this talk, we present a new methodology for applying inexactness to the most fundamental iteration in constrained optimization: a line-search primal-dual Newton algorithm. We illustrate that the choice of merit function is crucial for ensuring global convergence, and we discuss novel techniques for handling non-convexity, ill-conditioning, and the presence of inequality constraints in such an environment. Preliminary numerical results are presented for PDE-constrained optimization problems.
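
For orientation, the following is a minimal schematic of a generic inexact Newton iteration for an equality-constrained problem, of the kind the talk builds on. The symbols (f, c, L, the forcing sequence η_k, the penalty parameter μ_k, and the sufficient-decrease constant τ) and the particular merit function shown are illustrative assumptions, not details taken from the talk itself, which concerns precisely how such conditions must be designed to guarantee global convergence.

```latex
% Schematic: one inexact Newton iteration for the equality-constrained problem
%   min_x f(x)   subject to   c(x) = 0,
% with Lagrangian L(x, \lambda) = f(x) + \lambda^T c(x).
\begin{align*}
  % Primal-dual (Newton-KKT) system at the iterate (x_k, \lambda_k),
  % solved only approximately, leaving a residual (\rho_k, r_k):
  \begin{bmatrix}
    \nabla_{xx}^2 L(x_k, \lambda_k) & \nabla c(x_k) \\
    \nabla c(x_k)^T & 0
  \end{bmatrix}
  \begin{bmatrix} d_k \\ \delta_k \end{bmatrix}
  &= -
  \begin{bmatrix}
    \nabla f(x_k) + \nabla c(x_k)\lambda_k \\
    c(x_k)
  \end{bmatrix}
  +
  \begin{bmatrix} \rho_k \\ r_k \end{bmatrix}, \\[4pt]
  % Inexactness: an iterative (e.g., Krylov) solver is stopped once the
  % residual satisfies a Dembo--Eisenstat--Steihaug-type forcing condition:
  \left\| \begin{bmatrix} \rho_k \\ r_k \end{bmatrix} \right\|
  &\le \eta_k
  \left\| \begin{bmatrix} \nabla f(x_k) + \nabla c(x_k)\lambda_k \\ c(x_k) \end{bmatrix} \right\|,
  \qquad \eta_k \in [0, \bar{\eta}], \ \bar{\eta} < 1, \\[4pt]
  % Globalization: a backtracking line search on a merit function, e.g. the
  % exact penalty \phi(x; \mu) = f(x) + \mu \|c(x)\|, accepts a step length
  % \alpha_k yielding sufficient decrease along the (inexact) direction d_k:
  \phi(x_k + \alpha_k d_k; \mu_k)
  &\le \phi(x_k; \mu_k) + \tau\, \alpha_k\, D\phi(x_k; \mu_k)[d_k],
  \qquad \tau \in (0, 1).
\end{align*}
```

The central difficulty the abstract alludes to is that an inexact step need not be a descent direction for a given merit function, so the forcing condition, the merit function, and its penalty parameter cannot be chosen independently if global convergence is to be retained.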