Installing Darknet on Ubuntu 20.04

Sema Zeynep Bulut · May 4, 2021

Darknet is an open-source neural network framework written in C and CUDA. It supports both CPU and GPU computation and runs on macOS, Windows, and Linux. However, some files and settings may need to be reconfigured before you can use the framework.
This article explains how to install Darknet on Ubuntu 20.04. First of all, if you want to run the Darknet framework on the GPU, you must have an NVIDIA driver; then you have to install CUDA and cuDNN. I could not find a source with detailed information about using Darknet on Ubuntu 20.04, which is why I decided to write this article. Most sources say that CUDA 10.0 must be installed to use the Darknet framework, but CUDA 10.0 is not compatible with Ubuntu 20.04 (x86_64). The CUDA version is also very important for compatibility with the TensorFlow library when working with Darknet. So what should we do? My research showed that to use Darknet successfully on 20.04 we need to install CUDA 11.1 and cuDNN 8.0.5, which works with this CUDA version. In addition, the TensorFlow version installed in the virtual environment we will work in must be 2.4, the TensorFlow release compatible with CUDA 11.1.

Before starting the installation, check whether you already have the NVIDIA driver. There are several ways to do this on Linux, but the simplest is to run the following command in the terminal.

Note: You may need to retype some punctuation marks. Since I wrote this text on Windows, characters such as quotes and dashes may have been converted, and copying them as-is into the terminal can cause errors.

nvidia-smi

If you want to change the NVIDIA driver version on the machine you are working on, choose one of the driver versions listed under the Additional Drivers tab in Software & Updates. Once you confirm the selection, the installation starts automatically. After the driver installation finishes, you must restart your machine.
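
If you prefer the terminal over the Software & Updates GUI, the ubuntu-drivers tool that ships with Ubuntu can do the same job; a minimal sketch:

ubuntu-drivers devices            # list the NVIDIA driver versions available for your GPU
sudo ubuntu-drivers autoinstall   # install the recommended driver, then reboot as below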

sudo reboot

Make sure that the driver version you choose is compatible with the CUDA toolkit.

To check whether your Linux version is supported by the CUDA development tools, run the command below in the terminal. You can find the list of supported distributions in the CUDA Toolkit documentation.

uname -m && cat /etc/*release

The gcc compiler is required for the CUDA installation. To check the compiler version:

gcc --version

Before installing the CUDA drivers, the headers and some additional packages must be installed for the kernel running on the system. You can first query the kernel version you are working with using "uname -r" and then run the matching header installation command.

Command to install the kernel headers on Ubuntu:

sudo apt-get install linux-headers-$(uname -r)

CUDA setup

Open the address at the link below. On the download page, select Linux, x86_64, Ubuntu, 20.04, and the runfile (local) installer type.

https://developer.nvidia.com/cuda-11.1.0-download-archive?

Copy the download command shown on the CUDA 11.1 page and paste it into the terminal to download the installer. Make sure the directory you run the following commands from is the same directory the file was downloaded to.
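
For reference, the command on that page looks roughly like this; copy the exact command from the page, since the file name and URL can change between point releases:

# downloads the CUDA 11.1 runfile installer; verify the exact URL on the download page
wget https://developer.download.nvidia.com/compute/cuda/11.1.1/local_installers/cuda_11.1.1_455.32.00_linux.run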

To allow execution:

sudo chmod +x cuda_11.1.1_455.32.00_linux.run

The numbers after 11.1 may differ depending on CUDA toolkit updates. You can copy the exact file name (the expression starting with cuda_) from the end of the download output in the previous step and paste it after the +x.

sudo sh cuda_11.1.1_455.32.00_linux.run

When the installer runs, you will see a screen with Abort and Continue options. Because a driver is already installed on your computer and the package also contains a driver, the installer asks you to continue after removing the existing driver. Since we will not select the driver component during installation, choose Continue. You will then see the license agreement; continue by typing accept.

On the component selection screen, deselect the Driver entry (since the NVIDIA driver is already installed) and continue with Install.

We need to define some paths in the .bashrc file. This file contains commands that the user wants to have run automatically at the start of every terminal session on Linux distributions.

Open the file with the command "nano ~/.bashrc". Add the paths that should be included in the file as follows, and do not forget to save the file before closing it.

export PATH=$PATH:/usr/local/cuda-11.1/bin

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.1/lib64

export CUDADIR=/usr/local/cuda-11.1

Run the following command in the terminal so that the current shell session picks up the changes you made.

source ~/.bashrc

Go to https://developer.nvidia.com/rdp/cudnn-archive and select cuDNN v8.0.5 for CUDA 11.1, downloading the cuDNN Library for Linux (x86_64) tar archive. Before downloading, you must be a member of the NVIDIA Developer Program; if you are not, you can create an account in a very short time.

After downloading, change to the directory where the file was saved in the terminal. Then find the installation-on-Linux section at https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html.

Extract the archive by pasting the name of the downloaded file after tar -xzvf, then run the commands in step 3 of the guide one by one.
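
For reference, those steps typically look like the following; the exact archive name depends on the file you downloaded:

# extract the cuDNN archive (the file name below is an example, adjust it to match your download)
tar -xzvf cudnn-11.1-linux-x64-v8.0.5.39.tgz
# copy the headers and libraries into the CUDA installation and make them world-readable
sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*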

Restart your computer with the sudo reboot command.

You can check your CUDA version by running this command.

nvcc --version

I mentioned earlier that the older CUDA 10.x versions are not compatible with Ubuntu 20.04. If you had run this command before these installations, it would have failed because there is no CUDA compiler on your computer and suggested installing a CUDA toolkit package instead. Since the version that suggestion installs is CUDA 10.1, which is not compatible with the setup described here, you would get an error at the end of that installation.

To set up a virtual environment, create a new folder and change into it in the terminal. Install the Python 3 venv package with the command below.

sudo apt install python3-venv

Here the venv module creates a virtual environment inside the tf24 folder. You can choose a name other than tf24 if you like. You do not need to create the folder yourself; when you run the command, the virtual environment is created under the folder name you specified.

python3 -m venv tf24

Go back to the home directory. Activate the virtual environment you created with the code below.

source ~/venv_folder_name/tf24/bin/activate

When the virtual environment is active, you will see the name of the venv you created in front of your username, for example: (tf24)name@name:~$

Deactivate the virtual environment with the "deactivate" command. Then open the .bashrc file, to which we added our paths earlier, with the command:

nano ~/.bashrc

We will define a shortcut in the file to activate the tf24 virtual environment:

alias tf24="source ~/venv_folder_name/tf24/bin/activate; echo \"Tensorflow 2.4 w/ jupyter is activated.\""

With this alias added, typing tf24 in the terminal and pressing Enter will activate the virtual environment. You can customize the text after echo; whatever you write there is printed to the screen once the source command succeeds.

Run the command source ~/.bashrc to load the changes you made to the file. Then type tf24 and press Enter. (Use whatever name you gave your virtual environment.)

The necessary packages must now be installed in the virtual environment, and of course we have to run some checks.

pip install --upgrade pip
pip install "tensorflow-gpu==2.4.*"  # TensorFlow 2.4 is the release compatible with CUDA 11.1
python  # after the install finishes (with the venv still active), type the next two lines at the Python prompt
import tensorflow
exit()

After the import runs, you should see a line mentioning libcudart.so.11.0 (the CUDA 11 runtime library being loaded) at the end of the output on the screen.

pip install jupyter
pip install tqdm scikit-learn scikit-image pandas matplotlib
jupyter notebook  # start the notebook server

import tensorflow as tf
print(tf.__version__)

physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
print(physical_devices)

After the Jupyter notebook opens, run the code above to check whether TensorFlow can see your GPU.

If you cannot see the GPU, run the code below.

# log which device each operation is placed on
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))
# list all devices (CPUs and GPUs) visible to TensorFlow
from tensorflow.python.client import device_lib
device_lib.list_local_devices()

You can develop models by using the GPU in the tf24 virtual environment.

# install OpenCV
pip install opencv-python
pip install opencv-contrib-python

You are now ready to download Darknet. Paste the command below into the terminal where tf24 is active:

git clone https://github.com/AlexeyAB/darknet.git

Change into the darknet folder in the terminal. Open the Makefile in that folder and update the values of GPU, CUDNN, and OPENCV from 0 to 1, as in the sketch below.
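
If you prefer to make these changes from the command line, something like this works (editing the Makefile by hand with nano is just as good):

cd darknet
sed -i 's/^GPU=0/GPU=1/' Makefile
sed -i 's/^CUDNN=0/CUDNN=1/' Makefile
sed -i 's/^OPENCV=0/OPENCV=1/' Makefile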

Run the “make” command in the darknet folder.

If you did not get any errors, you can verify the build by running the "./darknet" command and checking that it prints "usage: ./darknet <function>".

I hope you have successfully installed Darknet, thank you for reading!
