From 7a9ca316f6b4f5a79491b25ca3bfd42132ffa2a6 Mon Sep 17 00:00:00 2001
From: igraf <igraf@cl.uni-heidelberg.de>
Date: Fri, 23 Feb 2024 22:18:20 +0100
Subject: [PATCH] Update challenge part

---
 project/README.md | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/project/README.md b/project/README.md
index 7368616..1708e6e 100644
--- a/project/README.md
+++ b/project/README.md
@@ -545,13 +545,15 @@ We have also tested training a Random Forest model and a CNN model on a reduced
 
 ## Challenges & Solutions
 
-One significant challenge we faced in our fruit image classification project was the need for substantial computational resources. This was mainly due to the intensive nature of training deep learning models, especially when dealing with a dataset that, while not overly large, was complex enough to require advanced processing.
+One significant challenge we faced in our fruit image classification project was the need for **substantial computational resources**. This was mainly due to the intensive nature of training deep learning models, especially when dealing with a dataset that, while not overly large, was complex enough to require advanced processing.
 
-To effectively manage this challenge, we utilized the BWUniCluster and the CoLiCluster for our computational needs. These high-performance computing clusters provided us with the necessary power to train our models efficiently. By leveraging these resources, we were able to conduct extensive training and experimentation with our models, which would have been considerably slower or even impractical with standard computing setups.
+To effectively manage this challenge, we utilized the *BWUniCluster* and the *CoLiCluster* for our computational needs. These high-performance computing clusters provided us with the necessary power to train our CNN models efficiently. By leveraging these resources, we were able to conduct extensive training and experimentation with our models, which would have been considerably slower or even impractical with standard computing setups.
 
 This approach not only expedited our training process but also allowed us to explore and refine our models to a greater extent, leading to more robust and accurate classification results. It highlights the importance of having access to appropriate computational resources in handling sophisticated machine learning tasks, even when the dataset size is not exceedingly large.
 
-Moreover, we recognize that the task itself is inherently challenging due to the nature of our (party self choosen) dataset. Many fruits look remarkably similar to each other, and the variability in their appearance, like different stages of ripeness or with/without peeling, adds another layer of complexity to the classification task. These factors make the project not just a test of our technical skills but also an exploration into the intricate world of image recognition and classification.
+For the basic classifiers, standard computing resources were sufficient to conduct the experiments, although training still took several hours. Running the experiments in parallel in `screen` sessions was a new and efficient way for us to speed up this process by executing multiple experiments at the same time.
+
+Moreover, we recognize that the task itself is inherently challenging due to the nature of our (partly self-chosen) dataset. Many fruits look remarkably similar to each other, and the variability in their appearance, such as different stages of ripeness or fruits with or without peel, adds another layer of **complexity** to the **classification task**. These factors make the project not just a test of our technical skills but also an exploration into the intricate world of image recognition and classification.
 
 ## Conclusion
 
-- 
GitLab