Weed Detection Using YOLOv5

Imtiaz Ul Hassan
Feb 17, 2023

YOLO is a popular real-time object detection algorithm. The main reason for YOLO's popularity is its speed. Unlike other object detection algorithms that perform region proposal and classification separately, YOLO performs both tasks in a single forward pass of the network, making it faster and more efficient. Hence it can be used in real-time computer vision applications. For example, a self-driving car might need to detect an obstacle using images from a camera; to avoid a crash or accident, that decision needs to be made in real time.

YOLOv5, released in 2020 by Ultralytics, is one of the variants of YOLO. In this article we will train YOLOv5 for weed detection. Weed detection technology can help farmers precisely target weed infestations and reduce the use of herbicides, resulting in cost savings and a more sustainable farming approach.

The first step in any object detection project is data collection and annotation. After collecting the data, you have to identify the number of classes in it. The data we collected has 7 classes, namely: karela, tori, bhindi, horseweed, herb paris, grass, and small weed. Some of them are shown below.

Once you have identified the classes, the next step is to annotate the data. There are many annotation tools available, both online and offline. In my experience, LabelImg is one of the better ones: it is lightweight and easy to use. LabelImg can be downloaded from here. The direct exe file for Windows can be found here.
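If you prefer pip over the exe, LabelImg can also be installed from PyPI (assuming a Python 3 environment) and launched from the command line:

pip3 install labelImg
labelImg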

Once you open it, LabelImg looks like the following.

Select the YOLO option, since YOLOv5 expects annotations in the .txt format; other models require the data in other formats. Next, open the classes file and write the names of the classes in the order you want. The first class will be assigned the number 0, the second the number 1, and so on.
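To make this concrete, here is the classes file for this dataset, written in the order used later in the yaml file (line one gets index 0, line two index 1, and so on):

herb paris
karela
small weed
grass
tori
horseweed
Bhindi

LabelImg then writes one .txt file per image, with one line per bounding box in the form class_id x_center y_center width height, where the coordinates are normalized to the 0–1 range. A made-up example line for a grass box (class index 3):

3 0.512 0.430 0.210 0.185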

After creating a bounding box, assign the class name, save the file, and move on to the next image. Annotate all the images this way. Once annotation is done, create the following hierarchy of folders.
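A sketch of a layout that works with YOLOv5 is shown below; the folder names Train and Test are the ones used later in the yaml file, and each folder holds the images together with their .txt label files (YOLOv5 also accepts separate images/ and labels/ subfolders):

weed1000/
├── Train/   (training images + their .txt label files)
└── Test/    (test images + their .txt label files)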

If you have 100 images, for example, you would use 80 images for training and 20 for testing. Once you have arranged your files like this, upload them to your Drive. I have uploaded my annotated files here.
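If you would rather script the split than move files by hand, here is a minimal sketch; it assumes all annotated images and their label files sit together in a folder named all (that folder name is just my assumption):

import random
import shutil
from pathlib import Path

random.seed(0)  # fix the seed so the split is reproducible

src = Path("weed1000/all")    # assumed: images + .txt labels together
train_dir = Path("weed1000/Train")
test_dir = Path("weed1000/Test")
train_dir.mkdir(parents=True, exist_ok=True)
test_dir.mkdir(parents=True, exist_ok=True)

images = sorted(src.glob("*.jpg"))  # adjust the extension if needed
random.shuffle(images)
split = int(0.8 * len(images))      # 80/20 split

for i, img in enumerate(images):
    dst = train_dir if i < split else test_dir
    shutil.copy(img, dst / img.name)
    label = img.with_suffix(".txt")  # the matching YOLO label file
    if label.exists():
        shutil.copy(label, dst / label.name)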

To use my data, you will have to open it and add a shortcut to your Drive. A better approach is to create a folder named weeddetection and add the shortcut there.

After creating the shortcut, open the official Ultralytics YOLOv5 Colab tutorial notebook, which can be found here. After opening the notebook, the next step is to mount your Drive, which can be done as follows.

Click on the mount drive icon.

This will create a code snippet as follows; run it, and allow the notebook to access your Google Drive files by clicking Connect to Google Drive.
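For reference, the generated cell contains the standard two lines for mounting Drive in Colab:

from google.colab import drive
drive.mount('/content/drive')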

After allowing it, you will be able to access your Drive files from the Colab environment.

The next step is to create a .yaml file for your custom dataset. In the yaml file you have to describe the classes in the order you annotated them. For example, let's say that in your dataset 0 stands for cat and 1 for dog; then you must follow the same order in the yaml too. The yaml file for the weed dataset is given as follows.

train: <relative path to the folder where the train data is>
val: <relative path to the folder where the test data is>

# Classes
nc: 7  # number of classes
names: ['herb paris', 'karela', 'small weed', 'grass', 'tori', 'horseweed', 'Bhindi']  # class names

To create this yaml file, open a text editor and copy the content above. Now you have to add the train folder path to it. For that, navigate to the train folder in the Colab file browser and copy its path, as follows.

Paste this path in front of train, and do the same for val. Note that in this scenario val is the same as test, so for val you will copy the path of the Test folder.

train: /content/drive/MyDrive/weed1000/Train
val: /content/drive/MyDrive/weed1000/Test

Once you have created the yaml file, you can give it any name ending with .yaml; in my case I have named it weed.yaml. Now upload this file either to the notebook environment or to Drive, as follows.

Upload weed.yaml file

Once you are done with that, you just have to run two cells, which are as follows.

!git clone https://github.com/ultralytics/yolov5  # clone YOLO github repository
%cd yolov5
%pip install -qr requirements.txt # install

import torch
import utils
display = utils.notebook_init() # checks

The above cells clone the Ultralytics YOLOv5 repo into your notebook environment, install the requirements, and run a quick environment check. Since we have already created the yaml file, the next step is to start training.

!python train.py --img 640 --batch 16 --epochs 3 --data <path to weed.yaml file> --weights yolov5s.pt --cache

Note that after --data you have to paste the path to the weed.yaml file which we created, so in my case the command is as follows.

!python train.py --img 640 --batch 16 --epochs 3 --data /content/weed.yaml --weights yolov5s.pt --cache

Also, after --weights we have written yolov5s.pt, which is the small version of YOLOv5. There are more versions: yolov5n (nano) is the lightest, followed by yolov5s, yolov5m, yolov5l, and yolov5x, the largest. There is always a trade-off between these models: as speed increases, the accuracy of the model decreases. Now when you run the above cell, training will start. If you want to trade some speed for accuracy, swapping versions only requires changing the weights file, as shown below.
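For example, to train with the medium version instead (train.py downloads yolov5m.pt automatically, just as it does yolov5s.pt):

!python train.py --img 640 --batch 16 --epochs 3 --data /content/weed.yaml --weights yolov5m.pt --cache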

Once training is finished, you can see the different training metrics in the folder runs/train.
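You can list the newest run folder from a notebook cell to see what was produced (the subfolder name exp, exp2, and so on changes with each run):

!ls /content/yolov5/runs/train/exp

Typical contents include results.png with the loss and mAP curves, confusion_matrix.png, and a weights folder holding best.pt (the best checkpoint on validation) and last.pt (the final epoch).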

The weights folder contains the trained model. To increase performance, you can increase the number of epochs and the dataset size. Now, to make a prediction on a video or an image, upload it and run the following command.

!python detect.py --weights <path to trained model> --img 640 --conf 0.25 --source <path to video or image>

Remember, after --weights you give the path of the trained model, and after --source you write the path of the video or image on which you want to perform detection using your trained model. Also, download your trained model so you can use it later for deployment or reuse. In my case the command is as follows.

!python detect.py --weights /content/yolov5/runs/train/exp5/weights/best.pt --img 640 --conf 0.25 --source /content/weed13858.jpg

Once you run this, the output will be saved in the following folder.

/content/yolov5/runs/detect
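Finally, the downloaded best.pt can be reused outside Colab. The YOLOv5 repo supports loading custom weights through torch.hub; below is a minimal sketch, where the weights path and the image name are assumptions from my run:

import torch

# Load the custom-trained weights through the YOLOv5 torch.hub entry point
# (assumes best.pt has been downloaded into the working directory)
model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')

# Inference accepts an image path, URL, PIL image, or numpy array
results = model('weed13858.jpg')
results.print()  # per-class detection summary
results.save()   # saves an annotated copy under runs/detect/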

So that's all of it. Using the same procedure, you can train YOLOv5 object detection for any objects of your choice.
