To test the accuracy and efficiency of the proposed broad learning structure in classification, we assume prior knowledge of the numbers of feature nodes and enhancement nodes. This is, in fact, standard practice when building deep learning neural networks, where selecting the architecture is among the most challenging tasks of the whole learning process. In our experiments, the network is constructed with a total of 10×10 feature nodes and 1×1000 enhancement nodes. For reference, the deep structures of SAE, DBN, DBM, MLELM, and HELM are 1000-500-25-30, 500-500-2000, 500-500-1000, 700-700-15000, and 300-300-12000, respectively. Although the accuracy of 98.74% is not the best (it is nevertheless better than that of SAE and MLP), the training time on the server is surprisingly fast at 29.9157 seconds, while the testing time on the server is 1.0783 seconds. Moreover, it should be noted that the number of feature mapping nodes here is only 100. This result accords with the common intuition in large-scale learning that the information in practical applications is usually redundant.
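To make the node layout above concrete, the following is a minimal sketch of how a broad network with 10×10 feature nodes and 1×1000 enhancement nodes could be assembled. It uses random toy data in place of the real dataset; the input/output sizes, the tanh activations, the weight scaling, and the ridge parameter `lam` are illustrative assumptions, not the exact settings used in the experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for the real dataset (hypothetical sizes).
n_samples, n_inputs, n_classes = 200, 784, 10
X = rng.standard_normal((n_samples, n_inputs))
Y = np.eye(n_classes)[rng.integers(0, n_classes, n_samples)]

# 10 groups of 10 feature nodes each -> 100 mapped features in total.
n_groups, nodes_per_group = 10, 10
Z = np.hstack([
    np.tanh(X @ rng.standard_normal((n_inputs, nodes_per_group)) * 0.1)
    for _ in range(n_groups)
])  # shape: (n_samples, 100)

# One group of 1000 enhancement nodes, computed from the feature nodes.
n_enhance = 1000
H = np.tanh(Z @ rng.standard_normal((Z.shape[1], n_enhance)) * 0.1)

# Output weights solved in one shot with a ridge-regularized
# least-squares fit (a stand-in for the pseudoinverse solution).
A = np.hstack([Z, H])  # shape: (n_samples, 1100)
lam = 1e-3
W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

pred = (A @ W).argmax(axis=1)
```

Because the output weights are obtained by a single linear solve rather than iterative backpropagation, training cost is dominated by one matrix factorization over the 1100 expanded features, which is consistent with the short training times reported above.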