With the recent improvements in 3-D capture technologies for applications such as virtual reality, preserving cultural artifacts, and mobile mapping systems, new methods for compressing 3-D point cloud representations are needed to reduce the bandwidth or storage they consume. For point clouds with attributes such as color associated with each point, several existing methods perform attribute compression by partitioning the point cloud into blocks and reducing redundancies among adjacent points. If, however, many blocks are sparsely populated, few or no points may be adjacent, thus limiting the compression efficiency of the system. In this paper, we present two new methods that use block-based prediction and graph transforms to compress point clouds containing sparsely populated blocks. One method compacts the data to guarantee one DC coefficient for each graph-transformed block, and the other method uses a K-nearest-neighbor extension to generate more efficient graphs.
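As a rough illustration of the K-nearest-neighbor idea, the sketch below builds a k-NN graph over the points in a block and applies a graph transform (the eigenbasis of the graph Laplacian) to the attribute signal. The Gaussian edge weights, the combinatorial Laplacian, and all function and parameter names are assumptions for illustration, not details specified by the abstract.

```python
import numpy as np

def knn_graph_transform(points, attrs, k=3, sigma=1.0):
    """Illustrative sketch (not the paper's exact method): connect each
    point in a block to its k nearest neighbors, weight edges with a
    Gaussian kernel, and transform the attribute signal into the
    eigenbasis of the resulting graph Laplacian."""
    n = len(points)
    # Pairwise Euclidean distances between all points in the block.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # k-NN adjacency with Gaussian edge weights (assumed kernel choice).
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d[i])[1:k + 1]  # skip self at distance 0
        W[i, nbrs] = np.exp(-d[i, nbrs] ** 2 / (2 * sigma ** 2))
    W = np.maximum(W, W.T)            # symmetrize the adjacency
    L = np.diag(W.sum(axis=1)) - W    # combinatorial graph Laplacian
    evals, evecs = np.linalg.eigh(L)  # eigenbasis = transform basis
    coeffs = evecs.T @ attrs          # graph-transform coefficients
    return evals, coeffs
```

For a connected graph, the smallest Laplacian eigenvalue is zero with a constant eigenvector, so a constant attribute signal concentrates all of its energy in that single DC coefficient; connecting distant points via k-NN edges is what keeps sparse blocks from fragmenting into many sub-graphs, each of which would otherwise contribute its own DC coefficient.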