We survey a variety of recent results concerning local optima of penalized M-estimators, designed for high-dimensional regression problems. The nonconvexity is allowed to arise in either the loss function or the regularizer. Although the overall landscape of the objective function is nonconvex in high dimensions, we show that both local and global optima are statistically consistent under appropriate conditions. Our theory is applicable to settings involving errors-in-variables models and other contaminated data scenarios. We also discuss statistical and optimization theory for nonconvex M-estimators suited for robust regression in high dimensions.
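The estimators surveyed above can be summarized by a generic composite program; the following display is an illustrative sketch, where the loss $\mathcal{L}_n$, the regularizer $\rho_\lambda$, and the side-constraint radius $R$ are placeholder symbols not fixed by the abstract:

```latex
\widehat{\theta} \in \arg\min_{\|\theta\|_1 \le R} \Big\{ \mathcal{L}_n(\theta) + \rho_\lambda(\theta) \Big\},
```

where either the empirical loss $\mathcal{L}_n$ (e.g., a corrected loss for errors-in-variables data, which may fail to be convex) or the regularizer $\rho_\lambda$ (e.g., a nonconvex penalty such as SCAD or MCP) is allowed to be nonconvex, and the side constraint keeps the possibly unbounded program well-defined.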