This project builds a single-stage detector for occluded pedestrians. We learn the full-body region from visible parts and use anchor calibration to bridge the gap between these two tasks. Against state-of-the-art alternatives on the CityPersons [1] dataset, our model achieves competitive miss rates of 10.8% under reasonable occlusion and 42.0% under heavy occlusion, with a comparable detection speed of 0.22 s per high-resolution (1024 x 2048) image. In terms of generalization, the model transfers to varied scenarios such as shopping malls and beaches, as shown in the demos above. The model was trained on three public datasets: CityPersons [1], Caltech [2], and ETH [3].
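The miss-rate numbers quoted above are conventionally computed as the log-average miss rate over false positives per image (FPPI) in [10^-2, 10^0], following the Caltech evaluation protocol. A minimal sketch of that metric (function and argument names are our own, not from this project's code):

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate, num_points=9):
    """Log-average miss rate (Caltech protocol): geometric mean of the
    miss rate sampled at num_points FPPI values spaced evenly in
    log-space over [1e-2, 1e0].

    fppi, miss_rate: arrays describing the MR-FPPI curve, with fppi
    sorted in ascending order.
    """
    refs = np.logspace(-2.0, 0.0, num_points)
    sampled = []
    for r in refs:
        # take the miss rate at the largest FPPI <= r;
        # if the curve starts above r, fall back to its first point
        idx = np.where(fppi <= r)[0]
        mr = miss_rate[idx[-1]] if idx.size else miss_rate[0]
        sampled.append(max(mr, 1e-10))  # clip to avoid log(0)
    return float(np.exp(np.mean(np.log(sampled))))
```

A detector whose miss rate is constant across the FPPI range yields that same constant as its log-average miss rate, which makes the metric easy to sanity-check.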
[1] S. Zhang, R. Benenson, and B. Schiele, "CityPersons: A diverse dataset for pedestrian detection," in CVPR, 2017.
[2] P. Dollár, C. Wojek, B. Schiele, and P. Perona, "Pedestrian detection: An evaluation of the state of the art," TPAMI, vol. 34, 2012.
[3] A. Ess, B. Leibe, and L. Van Gool, "Depth and appearance for mobile scene analysis," in ICCV, 2007.