We propose an autoencoder with graph topology learning to learn compact representations of 3D point clouds in an unsupervised manner. As discrete representations of continuous surfaces, 3D point clouds are either directly acquired via 3D scanners such as LiDAR sensors, or generated from multi-view images or RGB-D data. Unlike 1D speech data or 2D images, which are associated with regular lattices, 3D point clouds are usually sparsely and irregularly scattered in 3D space, which makes it difficult for traditional lattice-based algorithms to handle them. Most previous works discretize 3D point clouds by transforming them into either 3D voxels or multi-view images, causing volumetric redundancy and quantization artifacts. As a pioneering work, PointNet is a deep-neural-network-based method that applies a pointwise multi-layer perceptron followed by max pooling to handle raw 3D points, achieving remarkable performance on many supervised tasks, including classification, part segmentation, and semantic segmentation of 3D point clouds.
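To make the pointwise-MLP-plus-max-pooling idea concrete, the following is a minimal NumPy sketch of that mechanism; the layer sizes and random parameters here are purely illustrative and do not reproduce PointNet's actual architecture or trained weights.

```python
import numpy as np

def pointwise_mlp_maxpool(points, weights, biases):
    """Apply a shared MLP to each point independently, then max-pool over points.

    points: (N, 3) array of xyz coordinates.
    weights, biases: parameters of the shared per-point MLP (illustrative sizes).
    Returns a single global feature vector that is invariant to point ordering.
    """
    h = points
    for W, b in zip(weights, biases):
        # The same linear layer + ReLU is applied to every point (shared weights).
        h = np.maximum(h @ W + b, 0.0)
    # Max pooling over the point dimension is a symmetric function,
    # so the result does not depend on the order of the input points.
    return h.max(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 3))                      # a toy point cloud
Ws = [rng.normal(size=(3, 64)), rng.normal(size=(64, 128))]
bs = [np.zeros(64), np.zeros(128)]

feat = pointwise_mlp_maxpool(pts, Ws, bs)             # 128-dim global feature
feat_shuffled = pointwise_mlp_maxpool(pts[rng.permutation(1024)], Ws, bs)
assert np.allclose(feat, feat_shuffled)               # permutation invariance
```

The final assertion illustrates why the max-pooling step matters: because it is a symmetric function, shuffling the input points leaves the global feature unchanged, which is essential for consuming unordered point sets.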