
Table of Contents

    20 November 2021, Volume 50 Issue 11
    Environment Perception for Intelligent Driving
    What can surveying and remote sensing do for intelligent driving?
    LI Deren, HONG Yong, WANG Mi, TANG Luliang, CHEN Liang
    2021, 50(11):  1421-1431.  doi:10.11947/j.AGCS.2021.20210280
    From automated driving vehicles to intelligent vehicles, and then to intelligent connected vehicles, intelligent driving technology and its industry have developed rapidly, with surveying and remote sensing technology playing an important supporting role. This paper first introduces the development of intelligent vehicles at home and abroad and their differences from conventional vehicles. From the perspective of three important waves of scientific and technological development, it comparatively analyzes and summarizes the development processes and core technological driving forces of autonomous driving and of surveying and remote sensing. The key technologies of intelligent driving vehicles are then introduced from four aspects: top-level planning and the policy environment; key vehicle technologies for environment perception and computational decision-making; key basic support technologies of high definition maps and navigation positioning; and key technologies for vehicle-road collaborative information interaction. Several advanced navigation and positioning achievements, such as satellite-based navigation augmentation, high-precision position and attitude measurement, and multi-source fusion perception, are taken as examples to demonstrate that surveying and mapping can enable intelligent driving of a single vehicle. By introducing mobile measurement and crowdsourced high definition map-making technology, the key supporting role of surveying and remote sensing for intelligent driving is expounded. The construction of a space-based information real-time service system is taken as an example to describe how surveying, mapping, and remote sensing will serve the "person-vehicle-road-cloud" system of intelligent connected vehicles in the future. Finally, the paper summarizes and analyzes the problems that need to be solved in developing intelligent (connected) vehicles and the challenges to be addressed in the near future.
    A generalized data model of high definition maps
    ZHANG Pan, LIU Jingnan
    2021, 50(11):  1432-1446.  doi:10.11947/j.AGCS.2021.20210254
    HD (high definition) maps have gradually become an indispensable part of autonomous driving, but there is still no unified data model or standardized way of describing them, especially in the map production and data exchange stages. In response to this problem, this paper first analyzes the advantages and disadvantages of current HD map data models such as NDS, OpenDRIVE, and LaneLet, and then proposes a generalized data model for HD maps called the Whu map model. In the lane model, lane groups are used as the data management units; a lane group is composed of one or several lanes on the same road section, and each lane is composed of a left lane divider, a right lane divider, a lane centerline, and lane attributes. The validity and robustness of the lane model are verified by constructing lane topology for scenes where the number of lanes changes. For the data model of traffic facilities, a road marking model and a traffic sign model are defined, describing geometry, type, semantic information, and association with roads and lanes. Finally, the usefulness and effectiveness of the data model are verified through large-scale data production, compilation to the NDS and OpenDRIVE formats, and electronic horizon application experiments based on the ADASIS V3 protocol. The Whu map model can be used as a general exchange format and can also be applied at all stages of HD map production. At the same time, it is backward compatible and easy to extend. All of this helps to standardize the HD map data model, thereby promoting the large-scale production and application of HD maps.
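    As a rough illustration of the lane model described above, the sketch below encodes lane groups, lanes, and dividers as plain Python data classes. The class and field names are hypothetical, chosen to mirror the abstract's wording; they are not the actual Whu map model schema.

        from dataclasses import dataclass, field
        from typing import Dict, List, Tuple

        Point3D = Tuple[float, float, float]

        @dataclass
        class LaneDivider:
            geometry: List[Point3D]          # polyline of the divider
            marking_type: str = "solid"      # e.g. solid, dashed, curb

        @dataclass
        class Lane:
            lane_id: int
            left_divider: LaneDivider
            right_divider: LaneDivider
            centerline: List[Point3D]
            attributes: Dict[str, str] = field(default_factory=dict)  # speed limit, lane type, ...
            successor_ids: List[int] = field(default_factory=list)    # lane-level topology

        @dataclass
        class LaneGroup:
            group_id: int                    # data management unit for one road section
            lanes: List[Lane]                # ordered left to right

    With one lane group per road section, the lane-number-change scenes mentioned in the abstract become topology edges between the lanes of successive groups.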
    Key technologies of multi-agent collaborative high definition map construction
    CHEN Long, LIU Kunhua, ZHOU Baoding, LI Qingquan
    2021, 50(11):  1447-1456.  doi:10.11947/j.AGCS.2021.20210259
    The higher the degree of driving automation, the higher the requirements on the high definition map. Intelligent high definition maps can provide information for L5-level autonomous vehicles and are an important direction for the future development of high definition maps. Building on current high definition map construction methods, a definition of multi-agent collaborative intelligent high definition map construction is proposed. Its construction framework and key technologies, namely multi-agent routing for data collection, fusion and expression of multi-source heterogeneous data, road scene cognition, intelligent high definition map fusion, and intelligent high definition map updating, are studied, and appropriate technical schemes for each are proposed. In addition, the challenges in the construction process ahead are analyzed.
    Progress and prospects of environment perception and navigation for deep space exploration rovers
    DI Kaichang, WANG Jia, XING Yan, LIU Zhaoqin, WAN Wenhui, PENG Man, WANG Yexin, LIU Bin, YU Tianyi, LI Lichun, LIU Chuankai
    2021, 50(11):  1457-1468.  doi:10.11947/j.AGCS.2021.20210290
    Environment perception and navigation are core technologies for the automated driving of deep space exploration rovers. Due to the special circumstances of the deep space environment, a rover's environment perception and navigation are particularly challenging compared with the corresponding technologies for autonomous car driving on Earth. This paper reviews the progress of environment perception, rover localization, and path planning from both engineering application and scientific research perspectives. Future prospects for intelligent environment perception and long-range rover navigation are also discussed.
    Fine-grained traffic information prediction at the turning-level based on low-frequency GNSS trajectory data
    FANG Mengyuan, TANG Luliang, YANG Xue, HU Chun
    2021, 50(11):  1469-1477.  doi:10.11947/j.AGCS.2021.20210252
    Floating-car GNSS trajectory data have been widely used to obtain and predict urban traffic status in real time, offering wide coverage and low deployment cost, and the results play an important supporting role in route decision-making for automatic driving and traffic management. However, traffic information predicted from floating-car GNSS data usually covers only individual road segments, ignoring differences in traffic flow between driving directions at intersections; moreover, its accuracy is limited by the GNSS sampling frequency. This paper proposes a turning-level traffic prediction method based on a graph convolutional network and low-frequency GNSS trajectory data: first, a queuing-starting-point estimation model is proposed that considers vehicle movement patterns; second, a graph structure of turning connection relationships is constructed based on dual graph theory; finally, a traffic prediction model that captures spatio-temporal patterns is constructed based on the graph convolutional network. The experimental results show that the method can accurately obtain and predict traffic speed and queue length at the turning level, and that learning the spatio-temporal patterns within the graph effectively improves prediction accuracy.
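    To make the dual-graph idea concrete, here is a minimal numpy sketch of graph convolution over turning movements. It assumes nodes are turns and edges connect turns that share a road segment; the adjacency matrix, feature dimensions, and weights below are toy values, not the paper's trained model.

        import numpy as np

        def normalized_adjacency(A):
            # symmetric normalization with self-loops: D^-1/2 (A + I) D^-1/2
            A_hat = A + np.eye(A.shape[0])
            d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
            return d_inv_sqrt @ A_hat @ d_inv_sqrt

        def gcn_layer(A_norm, X, W):
            # aggregate features of connected turns, transform, then ReLU
            return np.maximum(A_norm @ X @ W, 0.0)

        # toy dual graph: 4 turning movements, edges between turns sharing a segment
        A = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        X = np.random.rand(4, 8)               # 8 historical speed/queue features per turn
        W1, W2 = np.random.rand(8, 16), np.random.rand(16, 1)
        A_n = normalized_adjacency(A)
        pred = gcn_layer(A_n, gcn_layer(A_n, X, W1), W2)   # one prediction per turn
        print(pred.ravel())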
    Visual odometry optimizing bounded with semantic elements association in dynamic scenes
    SHAO Xiaohang, WU Hangbin, LIU Chun, CHEN Chen, CAI Tianchi, CHENG Fanjin
    2021, 50(11):  1478-1486.  doi:10.11947/j.AGCS.2021.20210308
    Cameras enable low-cost positioning and environment perception for automated driving, but dynamic objects negatively affect visual odometry. This paper presents a model that optimizes visual odometry based on semantic element association. It uses object detection and semantic segmentation to identify semantic objects and to distinguish dynamic semantic elements (DSE) from static ones, and then filters unreliable keypoints with a dynamic feature mask during visual positioning. In practice, the proposed method detects DSE even for objects that can be either moving or static. In an experiment on campus roads, negative influences were observed especially when the robot turned around or a moving object crossed its camera view. The average accuracy of DSE detection was 87%, and the largest difference between trajectories before and after semantic association optimization across sub-sequences was 2.463 m. Compared with ground truth, the RMSE of the proposed method dropped by 38%.
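    A minimal sketch of the masking step, assuming a per-pixel semantic segmentation result is available; the DSE label set and array shapes are illustrative, not the paper's.

        import numpy as np

        DYNAMIC_CLASSES = {"person", "car", "bicycle"}    # assumed DSE label set

        def build_dynamic_mask(seg_labels, class_names):
            # seg_labels: H x W array of class ids from semantic segmentation
            dynamic_ids = [i for i, n in enumerate(class_names) if n in DYNAMIC_CLASSES]
            return np.isin(seg_labels, dynamic_ids)       # True on dynamic semantic elements

        def filter_keypoints(keypoints, mask):
            # drop keypoints that fall on dynamic semantic elements before pose estimation
            return [(x, y) for (x, y) in keypoints if not mask[int(y), int(x)]]

        labels = np.zeros((480, 640), dtype=int)
        labels[200:400, 100:300] = 1                      # a detected car region
        mask = build_dynamic_mask(labels, ["road", "car"])
        print(filter_keypoints([(150.0, 250.0), (500.0, 100.0)], mask))  # keeps the static point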
    A laser SLAM method for unmanned vehicles in point cloud degenerated tunnel environments
    LI Shuaixin, LI Jiuren, TIAN Bin, CHEN Long, WANG Li, LI Guangyun
    2021, 50(11):  1487-1499.  doi:10.11947/j.AGCS.2021.20210248
    Laser SLAM enables a vehicle to localize itself even in an unknown environment and to efficiently sample the three-dimensional geospatial information of the traversed environment, and has therefore drawn wide attention in the field of autonomous driving in recent years. To improve the accuracy and performance of laser SLAM in point-cloud-degenerated tunnel environments, we present an intensity-enhanced laser SLAM approach based on LOAM. First, we improve the feature extraction module of LOAM: an adaptive feature extraction method based on a spherical projection image is presented to extract lines, façades, ground, and reflectors from a single laser sweep. In addition, to address point cloud registration degeneracy in tunnels, we present an intensity-feature-based registration approach that corrects the vehicle pose resulting from geometric feature-based registration errors. Reflective features in the surroundings are extracted adaptively to ensure the adaptivity of the improved system. The experimental results show that the proposed method performs better and more robustly than LOAM and HDL-Graph-SLAM, especially in degenerated environments such as long straight tunnels, with accuracy an order of magnitude better than both.
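    The following numpy sketch shows one plausible form of the spherical projection step, turning a sweep of x, y, z, intensity points into range and intensity images on which line/plane features and high-intensity reflectors can be searched. The image size and vertical field of view are placeholder values for a generic spinning LiDAR, not the paper's configuration.

        import numpy as np

        def spherical_projection(points, h=64, w=1800, fov_up=15.0, fov_down=-15.0):
            # points: (N, 4) array of x, y, z, intensity from one laser sweep
            x, y, z, intensity = points.T
            r = np.linalg.norm(points[:, :3], axis=1)
            yaw = np.arctan2(y, x)                        # azimuth, [-pi, pi]
            pitch = np.arcsin(z / np.maximum(r, 1e-9))    # elevation
            fu, fd = np.radians(fov_up), np.radians(fov_down)
            u = ((1.0 - (pitch - fd) / (fu - fd)) * (h - 1)).astype(int)  # row
            v = (0.5 * (yaw / np.pi + 1.0) * (w - 1)).astype(int)         # column
            rng_img = np.full((h, w), -1.0)
            int_img = np.zeros((h, w))
            ok = (u >= 0) & (u < h)                       # drop points outside the FOV
            rng_img[u[ok], v[ok]] = r[ok]
            int_img[u[ok], v[ok]] = intensity[ok]
            # curvature on rng_img -> lines/planes; thresholds on int_img -> reflectors
            return rng_img, int_img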
    A multi-feature approach for modeling visual saliency of dynamic scene for intelligent driving
    ZHAN Zhicheng, DONG Weihua
    2021, 50(11):  1500-1511.  doi:10.11947/j.AGCS.2021.20210266
    Visual saliency modeling of driving scenes is an important research direction in intelligent driving, especially for assisted and automatic driving. Existing visual saliency modeling methods for static and virtual scenes cannot adapt to the real-time, dynamic, and task-driven characteristics of road scenes in real driving environments, so building a visual saliency model of dynamic road scenes in real driving environments remains a research challenge. Starting from the characteristics of the driving environment and the laws of drivers' visual cognition, this paper extracts low-level, high-level, and dynamic visual features of road scenes and combines two influencing factors, speed and road curvature, to build a visual saliency computation model for driving scenes based on logistic regression (LR). The model is evaluated with the AUC value, and the results show an accuracy of 90.43%, a significant advantage over traditional algorithms.
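    As a sketch of the scoring side of such an LR model, the snippet below maps per-location feature vectors (low-level, high-level, and dynamic features plus speed and curvature) to fixation probabilities. The feature values and weights are invented for illustration; the paper's model is fitted on driving data.

        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def saliency_scores(features, weights, bias):
            # features: (N, D) rows = scene locations; returns P(fixated) per location
            return sigmoid(features @ weights + bias)

        # 3 locations x 5 features: contrast, semantic class, motion, speed, curvature
        F = np.array([[0.8, 1.0, 0.2, 0.5, 0.1],
                      [0.1, 0.0, 0.9, 0.5, 0.1],
                      [0.3, 0.2, 0.1, 0.5, 0.1]])
        w = np.array([1.2, 2.0, 1.5, -0.3, 0.8])
        print(saliency_scores(F, w, bias=-1.0))   # higher = more likely to draw gaze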
    Localization method of mobile robot based on binocular vision and inertial navigation
    XU Zhibin, LI Hongwei, ZHANG Bin, XIAO Zhiyuan, DENG Chen
    2021, 50(11):  1512-1521.  doi:10.11947/j.AGCS.2021.20210250
    To improve the positioning accuracy of mobile robots, a visual SLAM algorithm based on binocular vision and inertial navigation is presented. In the SLAM front end, a semi-direct binocular visual odometer combining the direct method with the feature-based method is presented to retain the fast computation of the direct method while achieving high accuracy. In the back-end optimization stage, visual data and IMU data are fused, and error functions are constructed within a sliding window and minimized by non-linear optimization to improve the accuracy of pose estimation. The proposed algorithm is validated on the EuRoC dataset. The results show that, compared with the open-source visual-inertial SLAM systems OKVIS, ROVIO, and VINS-Mono, positioning accuracy is significantly improved in both the Machine Hall and Vicon Room scenarios while high operational efficiency is maintained.
    Vehicle tracking enhancement based on the lane orientation priori from digital maps
    ZHUANG Hanyang, WANG Xiaoliang, WANG Chunxiang, YANG Ming
    2021, 50(11):  1522-1533.  doi:10.11947/j.AGCS.2021.20210258
    Vehicle tracking aims at estimating the state of a target vehicle from continuous temporal measurements and is at the core of an intelligent vehicle's ability to understand its environment and predict the behaviors of targets. The LiDAR-based perception system of an intelligent vehicle provides precise vehicle detection results, which are the basis of vehicle tracking. However, the tracking process suffers from orientation mis-estimation and low trace stability, especially when target vehicles are far from the LiDAR, the key problem being the sparseness of the point cloud at long distance. Therefore, this paper proposes an enhanced vehicle tracking method based on a lane orientation prior from a digital map. It fuses the OpenStreetMap digital map with local lane marker detection results and builds a road model to obtain a lane orientation constraint. This constraint is used to improve vehicle orientation estimation within a tracking method built on the extended Kalman filter, improving tracking accuracy and stability. The tracking results indicate that multiple-object tracking accuracy can be increased by 0.33% and the average translation error reduced by at least 0.014 m; for target vehicles 60 m away from the host vehicle, the error can be reduced by 0.08 m.
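    One common way to realize such a constraint, sketched below, is to feed the map's lane orientation into the Kalman filter as a low-noise pseudo-measurement of target heading; the state layout and noise values are illustrative assumptions, not the paper's.

        import numpy as np

        def kf_update(x, P, z, R, H):
            # standard Kalman measurement update (linear H, so usable inside an EKF)
            y = z - H @ x
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            return x + K @ y, (np.eye(len(x)) - K @ H) @ P

        # state: [px, py, heading, speed] of a distant target with sparse LiDAR returns
        x = np.array([50.0, 3.5, 0.30, 12.0])
        P = np.diag([1.0, 1.0, 0.25, 2.0])
        H_lane = np.array([[0.0, 0.0, 1.0, 0.0]])   # pseudo-measurement observes heading only
        lane_heading = np.array([0.05])             # orientation prior from the road model
        x, P = kf_update(x, P, lane_heading, np.array([[0.01]]), H_lane)
        print(x[2], P[2, 2])   # heading pulled toward the lane orientation, variance shrinks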
    A road curb point extraction algorithm combining spatial features and measuring distance
    XU Dong, LIU Jingbin, HUA Xianghong, TAO Wuyong
    2021, 50(11):  1534-1545.  doi:10.11947/j.AGCS.2021.20210244
    Extracting accurate road curbs is a crucial task for driverless vehicles, but existing curb point extraction methods are not robust on sparse 16-ray LiDAR data. This paper presents a road curb point extraction algorithm that combines multiscale spatial features and measuring distance. Points outside the road area are first removed with the random sample consensus (RANSAC) algorithm, and most road surface points are then removed by judging the horizontal and vertical continuity between points in the same laser beam. According to the measurement model of road curb points, a reserved point is identified as a curb point if its measured distance lies within a reasonable range and the angle between the two horizontal vectors starting at that point exceeds a certain threshold. Experiments show that the proposed method performs better than other methods on 16-ray LiDAR datasets and meets the real-time requirements of environmental perception for driverless vehicles.
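    A sketch of the final decision rule, under one reading of the abstract: a reserved point is a curb point when its range falls in the window given by the measurement model and the laser ring bends sharply at it (the included angle between the two horizontal vectors to its ring neighbors deviates strongly from a straight 180 degrees). Thresholds and geometry are illustrative.

        import numpy as np

        def is_curb_point(p, left_nb, right_nb, rng, rng_min, rng_max, bend_thresh_deg=30.0):
            # p, left_nb, right_nb: 3D points on the same laser ring
            if not (rng_min <= rng <= rng_max):   # measurement-model distance window
                return False
            v1 = (left_nb - p)[:2]                # horizontal (x, y) components only
            v2 = (right_nb - p)[:2]
            cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
            included = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
            return (180.0 - included) > bend_thresh_deg   # a flat road ring gives ~180 degrees

        print(is_curb_point(np.array([5.0, 0.0, -1.6]),
                            np.array([4.9, 0.3, -1.6]),
                            np.array([5.3, 0.1, -1.55]),
                            rng=5.0, rng_min=3.0, rng_max=20.0))   # True: the ring bends here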
    Road intersection recognition based on a multi-level fusion of vehicle trajectory and remote sensing image
    LI Yali, XIANG Longgang, ZHANG Caili, WU Huayi, GONG Jianya
    2021, 50(11):  1546-1557.  doi:10.11947/j.AGCS.2021.20210255
    Road intersections are important components of a road network; they are not only numerous and diverse in shape but also complex in structure and varied in size. Recognizing road junctions comprehensively and accurately from a single data source is difficult because of its limited descriptive information. To this end, this paper designs a multi-level fusion method that identifies road intersections from vehicle trajectories and remote sensing images. First, following an unsupervised approach, a method combining morphological processing, density peak clustering, and tensor voting is proposed to extract seed intersections, which are treated as a small sample set. Based on this set, two deep convolutional network intersection classifiers, one for vehicle trajectories and one for remote sensing images, are constructed using a co-training mechanism; finally, the strengths of the two models are combined into an integrated intersection classification model. By fusing the complementary descriptive features of vehicle trajectories and remote sensing images at multiple levels, the proposed semi-supervised extraction technique can effectively identify complex and diverse road intersections without manual labeling. Experiments on Wuhan taxi trajectories and remote sensing images show an accuracy above 93% and a recall of 87% without manually labeled samples.
    A joint network of point cloud and multiple views for roadside objects recognition from mobile laser point clouds
    FANG Lina, SHEN Guixi, YOU Zhilong, GUO Yingya, FU Huasheng, ZHAO Zhiyuan, CHEN Chongcheng
    2021, 50(11):  1558-1573.  doi:10.11947/j.AGCS.2021.20210246
    Accurately identifying roadside objects such as trees, cars, and traffic poles from mobile LiDAR point clouds is of great significance for applications such as intelligent traffic systems, navigation and location services, autonomous driving, and high precision maps. In this paper, we propose a point-group-view network (PGVNet) that classifies roadside objects into trees, cars, traffic poles, and others by utilizing and fusing the high-level global features of multi-view images and the spatial geometric information of the point cloud. To reduce redundant information between similar views and highlight salient view features, PGVNet employs a hierarchical view-group-shape architecture that splits all views into groups according to their discriminative level, using a pre-trained VGG network as the backbone. In this architecture, globally significant features are generated from weighted group descriptors. Moreover, an attention-guided fusion network fuses the global features from multi-view images with local geometric features from the point cloud: the global features are quantified and leveraged as an attention mask to refine the intrinsic correlation and discriminability of the local geometric features, which contributes to recognizing roadside objects. Five test datasets of different urban scenes, captured by different mobile laser scanning systems, are used to evaluate the proposed method. The four accuracy metrics (precision, recall, quality, and F-score) for trees, cars, and traffic poles reach (99.19%, 94.27%, 93.58%, 96.63%), (94.20%, 97.56%, 92.02%, 95.68%), and (91.48%, 98.61%, 90.39%, 94.87%), respectively. Experimental results and comparisons with state-of-the-art methods demonstrate that PGVNet can effectively identify roadside objects from mobile LiDAR point clouds and can provide data support for element construction and vectorization in high precision map applications.
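    The attention-guided fusion can be pictured with the numpy sketch below: the global multi-view descriptor acts as a query that weights per-point geometric features before fusion. Dimensions and projection matrices are placeholders, not PGVNet's actual layers.

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def attention_guided_fusion(local_feats, global_feat, W_q, W_k):
            # global descriptor -> query; per-point features -> keys; weights -> attention mask
            q = global_feat @ W_q                       # (d,)
            k = local_feats @ W_k                       # (N, d)
            attn = softmax(k @ q / np.sqrt(q.size))     # (N,) mask over points
            attended = (attn[:, None] * local_feats).sum(axis=0)
            return np.concatenate([attended, global_feat])  # fused object descriptor

        rng = np.random.default_rng(0)
        local = rng.normal(size=(1024, 64))   # per-point features from the point branch
        glob = rng.normal(size=(128,))        # multi-view descriptor from the VGG branch
        fused = attention_guided_fusion(local, glob,
                                        rng.normal(size=(128, 32)), rng.normal(size=(64, 32)))
        print(fused.shape)                    # (192,) = 64 attended + 128 global dims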
    Visual map from around view system for intelligent vehicle localization in underground parking lots
    ZHOU Zhe, HU Zhaozheng, LI Na, XIAO Hanbiao, WU Jinxiang
    2021, 50(11):  1574-1584.  doi:10.11947/j.AGCS.2021.20210205
    In view of the lack of GPS signal in underground parking lots, a second-order Markov model and particle filter (MM-PF) method for intelligent vehicle localization in underground parking lots is proposed, based on a visual feature map constructed from the around view system. In this method, the nodes of the visual feature map are defined as particles, and the query images are defined as observation data. In the state transition step, a second-order Markov model is introduced to model the short-term motion of the vehicle. In addition, holistic features are employed to establish the matching relationship between the query image and each particle (visual feature map node) by assigning particle weights based on Hamming distance. Two typical underground parking lots are selected for the experiments. In both scenarios, the mean localization error is less than 0.38 m, the mean square error is less than 0.29 m, and the probability of a positioning error below 1 m is at least 95.4%. The experimental results demonstrate that the proposed method can integrate both motion and visual features to enhance localization performance, and that it outperforms state-of-the-art methods in localization accuracy and robustness.
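    A minimal sketch of the weighting step, assuming binary holistic descriptors (hash-like global image features) for the query image and each map node; the kernel and its bandwidth are assumptions, not the paper's exact weighting.

        import numpy as np

        def update_particle_weights(query_desc, node_descs, sigma=8.0):
            # weight each particle (map node) by similarity of its image to the query
            d = np.array([np.count_nonzero(query_desc != nd) for nd in node_descs], float)
            w = np.exp(-0.5 * (d / sigma) ** 2)   # Gaussian kernel on Hamming distance
            return w / w.sum()                    # normalized particle weights

        rng = np.random.default_rng(1)
        query = rng.integers(0, 2, 256)                      # 256-bit holistic descriptor
        nodes = [rng.integers(0, 2, 256) for _ in range(5)]  # visual feature map nodes
        print(update_particle_weights(query, nodes))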
    A tightly coupled SLAM method for precise urban mapping
    SUN Xiliang, GUAN Hongcan, SU Yanjun, XU Guangcai, GUO Qinghua
    2021, 50(11):  1585-1593.  doi:10.11947/j.AGCS.2021.20210243
    Aiming to reduce cumulative error and improve the robustness of SLAM systems for accurate urban mapping, a tightly coupled laser SLAM algorithm combining LiDAR, an inertial measurement unit (IMU), and the global navigation satellite system (GNSS) is developed. The proposed method achieves high-accuracy point cloud registration by adding pole-like and planar features, which reduces cumulative error in SLAM. In addition, a GNSS corner-based constraint is used to improve the accuracy of global map construction. This study compares the proposed method with three mainstream SLAM methods (LOAM, LeGO-LOAM, and LIO-SAM) in four common urban scenes (open park, underground garage, urban park, and road). The test results show that LOAM and LeGO-LOAM have poor stability in complex urban scenes, while LIO-SAM and the proposed method successfully map all scenes. Compared with LIO-SAM, the absolute position error (APE) of the proposed method is improved by 32.25% without the GNSS position factor and by 92.03% with it (APE<10 cm). Moreover, the absolute coordinate error of the point cloud generated by the proposed method in the open park scene is less than 5 cm, demonstrating that the method fulfills the requirements of centimeter-level urban mapping.
    Localization initialization for multi-beam LiDAR considering indoor scene feature
    SHI Pengcheng, YE Qin, ZHANG Shaoming, DENG Haifeng
    2021, 50(11):  1594-1604.  doi:10.11947/j.AGCS.2021.20210268
    For the problem of localization initialization (LI) of robots in large-scale indoor scenes, a localization initialization method based on feature patterns is proposed. First, through feature analysis of indoor scene structure, the method exploits robust man-made structures (e.g., walls, columns, and other structures that indicate spatial location), which are defined as feature patterns to improve the robustness of scene feature expression. Then, exploiting the characteristics of multi-beam light detection and ranging (LiDAR) point clouds, a method for extracting feature patterns from real-time data is proposed, with hierarchical management to improve the efficiency of feature expression. Next, a semi-automatic method is proposed to extract feature patterns from the point cloud map, and an efficient data management pipeline is designed to avoid redundant operations on map data across repeated initializations. Finally, two kinds of error equations are constructed for the different feature patterns; using a Levenberg-Marquardt solution and the hit ratio of map grid cells as the metric, an adaptive matching and registration strategy is proposed to accomplish LI of a robot in large-scale indoor scenes. To verify the feasibility of the method, a low-cost 16-line LiDAR was used in experiments in three typical indoor scenes: a corridor, a hall, and an underground parking lot. The results show that the proposed method accomplishes LI quickly and accurately in large-scale indoor scenes, essentially meeting the localization accuracy and efficiency requirements of indoor robots in practical applications.
    Research on key frame image processing of semantic SLAM based on deep learning
    DENG Chen, LI Hongwei, ZHANG Bin, XU Zhibin, XIAO Zhiyuan
    2021, 50(11):  1605-1616.  doi:10.11947/j.AGCS.2021.20210251
    Simultaneous localization and mapping (SLAM) has broad application prospects in many fields due to its high efficiency and low power consumption. However, traditional SLAM systems still have several problems: key frames in the traditional visual odometry contain no semantic information, the image information obtained by a mobile robot is relatively limited, and key frames in real scenes usually contain many mismatched points and dynamic points. In response, this paper proposes a new approach to semantic SLAM mapping. First, to find correctly corresponding feature points while discarding the interference of dynamic and mismatched points, a method for judging the feature state of adjacent frames based on the Lucas-Kanade optical flow method is proposed and added as a new thread to the visual odometry part of ORB-SLAM3, optimizing and improving part of the traditional SLAM framework. Second, to address the fact that image frames from the front-end visual odometry of a traditional SLAM system contain no semantic information, a target detection algorithm based on YOLOv4 and a Mask R-CNN semantic segmentation algorithm fused with a fully connected conditional random field (CRF) are used to process the key frame images of ORB-SLAM3, effectively improving the perception of indoor environments by smart devices such as robots.
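    One standard way to judge feature state with Lucas-Kanade flow, sketched here with OpenCV, is a forward-backward consistency check: features whose round-trip track drifts are treated as dynamic or mismatched and removed. The threshold is illustrative and this is not necessarily the paper's exact criterion.

        import cv2
        import numpy as np

        def stable_keypoints(prev_gray, curr_gray, pts, fb_thresh=1.0):
            # track prev -> curr, then curr -> prev; keep points whose round trip returns
            p0 = np.asarray(pts, np.float32).reshape(-1, 1, 2)
            p1, st1, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, p0, None)
            p0r, st2, _ = cv2.calcOpticalFlowPyrLK(curr_gray, prev_gray, p1, None)
            fb_err = np.linalg.norm(p0 - p0r, axis=2).ravel()
            good = (st1.ravel() == 1) & (st2.ravel() == 1) & (fb_err < fb_thresh)
            return p1.reshape(-1, 2)[good]   # tracked positions of stable features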
    A real-time map matching method for road network using driving scenario classification
    FU Chen, HUANG Shengke, TANG Yan, WU Hangbin, LIU Chun, YAO Lianbi, HUANG Wei
    2021, 50(11):  1617-1627.  doi:10.11947/j.AGCS.2021.20210261
    Real-time map matching plays a critical role in intelligent transportation and autonomous driving. For complex road networks such as elevated roads and overpasses, existing real-time matching algorithms have relatively low accuracy due to interference from parallel roads. Thus, a real-time map matching method combined with driving image classification is proposed. When the vehicle nears an elevated road, the current trajectory point is matched by combining the scenario classification result with the vehicle's heading direction, the distance to the road segment, and the adjacency to the previously matched segment. For the experiment, three trajectories with high GNSS sampling rates were collected in Shanghai, and three indicators (match rate, recall, and precision) were used to evaluate matching performance. The results show that the average match rate, recall, and precision of the proposed method are 96.86%, 97.17%, and 93.46%, respectively, outperforming traditional real-time matching methods. As the sampling interval increases, the proposed method still performs well on all three indicators. Comparisons of matching results in complex areas such as elevated roads and intersections, as well as of matching time, latency, and memory consumption, show that the method maintains good matching performance.
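    The combination step can be pictured as a weighted score over candidate segments, as in the sketch below; the weights, normalizing constants, and feature set are illustrative stand-ins for the paper's decision logic.

        import numpy as np

        def candidate_score(heading_diff_deg, dist_m, adjacent, scene_match,
                            w=(0.4, 0.3, 0.2, 0.1)):
            s_heading = max(0.0, 1.0 - abs(heading_diff_deg) / 90.0)  # heading agreement
            s_dist = max(0.0, 1.0 - dist_m / 30.0)                    # proximity to segment
            s_adj = 1.0 if adjacent else 0.0      # connected to the previous matched segment
            s_scene = 1.0 if scene_match else 0.0 # driving-image classifier agrees (e.g. "elevated")
            return float(np.dot(w, [s_heading, s_dist, s_adj, s_scene]))

        # disambiguate two parallel candidates under an elevated road
        scores = {"surface": candidate_score(12.0, 6.0, True, False),
                  "elevated": candidate_score(3.0, 5.0, True, True)}
        print(max(scores, key=scores.get))   # "elevated"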
    Road slope real-time detection for unmanned truck in surface mine
    MENG Dejiang, TIAN Bin, CAI Feng, GAO Yijun, CHEN Long
    2021, 50(11):  1628-1638.  doi:10.11947/j.AGCS.2021.20210242
    Given the steepness of road slopes in surface mines, an unmanned truck may run at risk in an unknown environment if it cannot plan a proper speed in advance. It is therefore crucial for an autonomous vehicle to perceive an accurate road slope value in real time. However, this accuracy is hard to achieve with existing methods, including the global navigation satellite system (GNSS), the inertial navigation system (INS), and simultaneous localization and mapping (SLAM). GNSS and INS can measure a truck's pitch angle, but this angle does not equal the road slope because of the large pitching and bouncing motions caused by steep and uneven roads. For the same reason, SLAM does not work well either, and it also loses efficacy where geometric features are not distinctive, as in open-pit mines. To address these challenges, this paper proposes a grid Kalman road slope real-time detection (GKSRD) method. Its inputs are the 3D point cloud of a LiDAR and the pitch from an INS, and it uses a 2D grid map, an iterative optimization algorithm over a rectangular region of interest, and a Kalman filter. Compared with GNSS- or INS-based methods, this method minimizes slope detection error; unlike SLAM-based methods, it does not rely on geometric features of the environment. Experiments verify that the average error of the road slope detected by the GKSRD method is less than 0.01 degree and the maximum error is less than 0.5 degree. Hence, compared with GNSS-, INS-, and SLAM-based methods, the GKSRD method is more accurate, stable, and adaptive.
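    A scalar sketch of the filtering idea: fit a plane over grid-cell heights in the region of interest to get a slope measurement per sweep, then smooth the sequence with a one-dimensional Kalman filter. The noise parameters and toy measurements are assumptions, not the GKSRD configuration.

        import numpy as np

        def slope_from_grid(cells_xyz):
            # least-squares plane z = a*x + b*y + c over grid-cell centroids;
            # returns the slope along the driving (x) direction in degrees
            X = np.c_[cells_xyz[:, :2], np.ones(len(cells_xyz))]
            coef, *_ = np.linalg.lstsq(X, cells_xyz[:, 2], rcond=None)
            return np.degrees(np.arctan(coef[0]))

        def kalman_1d(est, var, meas, meas_var, proc_var=0.01):
            # scalar Kalman update fusing the new grid-map slope with the running estimate
            var += proc_var                    # predict: slope may drift along the road
            gain = var / (var + meas_var)
            return est + gain * (meas - est), (1.0 - gain) * var

        est, var = 0.0, 1.0
        for meas in [4.8, 5.1, 4.9, 5.0]:      # slopes fitted from successive sweeps
            est, var = kalman_1d(est, var, meas, meas_var=0.25)
        print(round(est, 2))                   # smoothed road slope in degrees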