Accurate localization of natural gas pipeline leakage points is of great significance for emergency response to accidents. To address the shortcomings of existing leak localization methods, which rely on instantaneous signals, are easily disturbed by pipeline operation noise, and offer insufficient positioning accuracy, a pipeline leak localization method based on a TPE (tree-structured Parzen estimator)-XGBoost (eXtreme gradient boosting) model is proposed. The method uses pressure drop rate data collected at 5-second intervals after a leak to train the model and make predictions. To address the challenge of hyperparameter search, the optimization performance of TPE is compared with that of PSO (particle swarm optimization), BOA (Bayesian optimization algorithm), and Optuna. In addition, the prediction accuracy of the TPE-XGBoost model is compared with that of SVM (support vector machine), CNN (convolutional neural network), and CNN-LSTM-Attention models, and the effects of dataset time series length and superimposed operating noise on localization accuracy are analyzed. The results demonstrate that the TPE algorithm improves the accuracy of the XGBoost model and outperforms PSO, BOA, and Optuna in hyperparameter optimization. Compared with the other localization models, the TPE-XGBoost model achieves the highest prediction accuracy, with an R² of 0.9835 and a localization error of only 3.77% on the test set, whereas the R² values for the SVM, CNN, and CNN-LSTM-Attention models are 0.3684, 0.9285, and 0.9821, respectively. Analysis of time series length shows that, for data lengths of 30 to 150 seconds, the coefficients of determination increase with longer time series, and the optimal hyperparameters differ markedly across time lengths. When pipeline operating noise is superimposed on the experimental data, the accuracy of all models decreases, while the XGBoost model adapts its hyperparameters to extract valid information through finer segmentation. In this noise-added experimental group, the R² of the XGBoost model is 0.9580 and the localization error is 7.58%.
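To make the core idea of the abstract concrete, the following is a minimal sketch of a TPE-driven hyperparameter search for an XGBoost regressor, using the hyperopt library's TPE implementation. The search space, the placeholder pressure-drop-rate features, the target encoding of the leak position, and the 1 − R² loss are illustrative assumptions, not the paper's exact configuration or data.

```python
# Hedged sketch: TPE search over XGBoost hyperparameters via hyperopt.
# All data shapes, parameter ranges, and the objective are assumptions.
import numpy as np
import xgboost as xgb
from hyperopt import fmin, tpe, hp, Trials, STATUS_OK
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Placeholder data: each row holds pressure-drop-rate features sampled at
# 5 s intervals; the target is a hypothetical leak position expressed as a
# fraction of the pipeline length.
X = np.random.rand(500, 30)
y = np.random.rand(500)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Assumed search space; the paper's actual ranges are not specified here.
space = {
    "n_estimators":     hp.quniform("n_estimators", 100, 800, 50),
    "max_depth":        hp.quniform("max_depth", 3, 10, 1),
    "learning_rate":    hp.loguniform("learning_rate", np.log(0.01), np.log(0.3)),
    "subsample":        hp.uniform("subsample", 0.6, 1.0),
    "colsample_bytree": hp.uniform("colsample_bytree", 0.6, 1.0),
}

def objective(params):
    # hp.quniform returns floats, so integer parameters are cast back.
    model = xgb.XGBRegressor(
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
        learning_rate=params["learning_rate"],
        subsample=params["subsample"],
        colsample_bytree=params["colsample_bytree"],
        objective="reg:squarederror",
    )
    model.fit(X_train, y_train)
    # Minimize 1 - R^2 on held-out data so that TPE effectively maximizes R^2.
    return {"loss": 1.0 - r2_score(y_test, model.predict(X_test)),
            "status": STATUS_OK}

# TPE proposes new hyperparameter sets from Parzen-window density estimates
# of past good and bad trials.
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=100, trials=Trials())
print("Best hyperparameters found by TPE:", best)
```

In this sketch the trained model with the best-found hyperparameters would then be evaluated on an independent test set, analogous to the test-set R² and localization-error figures reported in the abstract.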