End-to-end learning using CARLA Simulator

Imtiaz Ul Hassan
4 min read · Aug 31, 2022

End-to-end learning refers to using a single system/model to perform a complex task instead of breaking it into smaller, simpler sub-tasks. One such complex task is driving. We humans are born with state-of-the-art capabilities when it comes to vision and learning complex tasks, so driving may seem simple to us; we only realize its complexity when we have to build a system to carry out the same task.

In this blog, we will demonstrate how end-to-end learning can be applied to the driving problem using convolutional neural networks. We will build a system that predicts the steering value required to drive, given an image of the road. We will use CARLA together with the carla-ros-bridge to carry out the simulation. Let's start. I have already installed CARLA and the carla-ros-bridge, so in order to follow this article make sure you have them installed as well.

First of all, open a terminal, cd into the directory where CARLA is installed, and start the CARLA server with the following command.

./CarlaUE4.sh -windowed -ResX=320 -ResY=240 -benchmark -fps=10 -quality-level=Low

This will open the CARLA server as follows.

Carla server

Now that CARLA is running as a server, let's run the carla-ros-bridge. Open a new terminal and cd to the following location:

<your home>carla-ros-bridge/catkin_ws/src/ros-bridge/carla_ros_bridge/launch/

Next, run the following command:

source ~/carla-ros-bridge/catkin_ws/devel/setup.bash

Now run carla_ros_bridge_with_example_ego_vehicle.launch as follows:

roslaunch ./carla_ros_bridge_with_example_ego_vehicle.launch

This will launch the carla-ros-bridge as a client connected to the CARLA server. An ego vehicle with sensors attached to it has been spawned, which can be seen in both the pygame window and the CARLA server window.

Pygame window
Carla server window

You can change the type of vehicle, its spawn position, and the sensor types and positions by editing the following JSON file:

<your home>carla-ros-bridge/catkin_ws/src/ros-bridge/carla_spawn_objects/config/objects.json
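To give an idea of what goes in that file, here is an illustrative excerpt showing an ego vehicle with a front RGB camera attached. The exact keys and values depend on your ros-bridge version, and the spawn point and image size below are my assumptions, so check the file shipped with your installation.

```json
{
  "objects": [
    {
      "type": "vehicle.tesla.model3",
      "id": "ego_vehicle",
      "sensors": [
        {
          "type": "sensor.camera.rgb",
          "id": "rgb_front",
          "spawn_point": {"x": 2.0, "y": 0.0, "z": 2.0,
                          "roll": 0.0, "pitch": 0.0, "yaw": 0.0},
          "image_size_x": 800,
          "image_size_y": 600,
          "fov": 90.0
        }
      ]
    }
  ]
}
```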

Now that we have spawned our Tesla Model 3, we can inspect the nodes and the topics on which the carla-ros-bridge and the CARLA agent publish and subscribe to different pieces of information.

Running “rostopic list” gives us the following topic list.
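The exact list depends on the sensors configured in objects.json; an abbreviated, illustrative excerpt includes the two topics we will use later:

```
$ rostopic list
...
/carla/ego_vehicle/rgb_front/image
...
/carla/ego_vehicle/vehicle_status
...
```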

For this work, we only need the RGB images and the corresponding steering values. ROS provides rosbags, which can be used to record the data published by any node. Before recording the data, let's start driving our car in autopilot mode. (Note that the autopilot mode is somewhat scripted, since it has access to everything from other objects to the road network.) Click on the pygame window and press B and then P. Now your Tesla Model 3 will start driving automatically around the town.

Now, to visualize the vehicle control values, enter the following command:

rostopic echo /carla/ego_vehicle/vehicle_status

content of ego_vehicle status

As we can see, the topic “/carla/ego_vehicle/vehicle_status” returns different pieces of information about our vehicle. For this project, we only need the steer value, which I have highlighted in the picture.
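For reference, the message on this topic is carla_msgs/CarlaEgoVehicleStatus; a rough sketch of its fields is shown below (paraphrased from memory of the carla_msgs definitions, so verify against your installed version). The steer value we want lives inside the nested control field.

```
std_msgs/Header header
float32 velocity                           # forward speed
geometry_msgs/Accel acceleration
geometry_msgs/Quaternion orientation
carla_msgs/CarlaEgoVehicleControl control
    float32 throttle                       # [0, 1]
    float32 steer                          # [-1, 1]  <- the value we record
    float32 brake                          # [0, 1]
    bool hand_brake
    bool reverse
    int32 gear
    bool manual_gear_shift
```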

Besides the steer value, we also need the corresponding road image for each steer value, which is available on the topic “/carla/ego_vehicle/rgb_front/image”. For this project, we will record the ego vehicle status and the front RGB images. Enter the following command to record both:

rosbag record /carla/ego_vehicle/rgb_front/image /carla/ego_vehicle/vehicle_status

This creates a bag file containing the records of both topics: the RGB front camera and the ego vehicle status. Next, we will extract the images and steer values for each frame from this bag file.

The following script can be used to extract the images and their corresponding steer values from the bag file.
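A minimal sketch of such an extraction script is shown below. The bag filename, output folder, and CSV name are my assumptions, not the author's originals; each image is paired with the most recent steer value published before it.

```python
#!/usr/bin/env python
"""Extract front-camera images and matching steer values from a rosbag.

Sketch only: the bag filename, output folder, and CSV name below
are assumptions.
"""
import csv
import os

IMAGE_TOPIC = "/carla/ego_vehicle/rgb_front/image"
STATUS_TOPIC = "/carla/ego_vehicle/vehicle_status"


def pair_frames(messages):
    """Pair each image with the most recent steer value seen before it.

    `messages` is an iterable of (topic, payload) tuples in time order:
    the payload of a status message is its float steer value, and the
    payload of an image message is the image itself. Images arriving
    before the first status message are dropped.
    """
    latest_steer = None
    pairs = []
    for topic, payload in messages:
        if topic == STATUS_TOPIC:
            latest_steer = payload
        elif topic == IMAGE_TOPIC and latest_steer is not None:
            pairs.append((payload, latest_steer))
    return pairs


def extract(bag_file="carla_recording.bag", out_dir="images",
            csv_file="steering.csv"):
    # ROS-specific imports live here so pair_frames stays importable
    # on machines without ROS.
    import rosbag                    # reads the .bag recording
    import cv2                       # writes frames to disk
    from cv_bridge import CvBridge   # sensor_msgs/Image -> OpenCV array

    bridge = CvBridge()
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)

    rows = []
    with rosbag.Bag(bag_file) as bag:
        # Replace status messages by their steer value, keep image
        # messages whole, then pair them up in time order.
        stream = ((topic, msg.control.steer if topic == STATUS_TOPIC else msg)
                  for topic, msg, _ in
                  bag.read_messages(topics=[IMAGE_TOPIC, STATUS_TOPIC]))
        for img_msg, steer in pair_frames(stream):
            fname = os.path.join(out_dir,
                                 "%d.png" % img_msg.header.stamp.to_nsec())
            cv2.imwrite(fname,
                        bridge.imgmsg_to_cv2(img_msg, desired_encoding="bgr8"))
            rows.append((fname, steer))

    with open(csv_file, "w") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "steer"])
        writer.writerows(rows)

# On a machine with ROS and the recorded bag present, run:
# extract("carla_recording.bag")
```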

Running the above script extracts the images and the corresponding steer values.

Below is a sample of the extracted pictures; the steer value for this picture is -0.001229000277817249298.

sample image

Now our data collection part is complete. The next step is to train a neural network on this data. You can find the collected dataset here.

Note that in order to get a better, more diverse dataset, you can change the weather, the town, and the time of day in the CARLA simulator. The next part of this article will be uploaded soon. Stay tuned!
