diff --git a/project/README.md b/project/README.md
index 73d0590d31616bc928df8b53547d51a8a0f79a05..c74b108bf5a1accd16de42c4bf8ee56e5303f076 100644
--- a/project/README.md
+++ b/project/README.md
@@ -52,7 +52,7 @@ In our project, we have focused on **30** specific **classes** out of the 262 av
 The original dataset lacks a predefined split into training, development (validation), and testing sets. To tailor our dataset for effective model training and evaluation, we implemented a custom script that methodically divides the dataset into specific proportions.
 
 <figure>
-<img  align="left" src="figures/dataset_split.png" alt= "Dataset Split" width="40%" height="auto">
+<img align="left" src="figures/dataset_split.png" alt="Dataset Split" width="45%" height="auto">
 </figure>
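
A minimal sketch of what such a custom split script might look like (the function name, proportions, and seed below are illustrative assumptions, not the project's actual script):

```python
import random

def split_dataset(items, train_frac=0.8, dev_frac=0.1, seed=42):
    """Shuffle items reproducibly and slice them into train/dev/test parts."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train_frac)
    n_dev = int(n * dev_frac)
    # The remainder after train and dev becomes the test set.
    return (items[:n_train],
            items[n_train:n_train + n_dev],
            items[n_train + n_dev:])

train, dev, test = split_dataset(range(100))
print(len(train), len(dev), len(test))  # 80 10 10
```

Fixing the random seed keeps the split reproducible across runs, so models trained at different times are evaluated on the same held-out images.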
 
 
@@ -111,7 +111,8 @@ The features we can use for training our models are always based on the **pixel
 
 In the dummy example below, there is a 3x3 pixel image. Each pixel has three values: intensity of red, green and blue (**RGB**).
 
-<img align="center" width="100%" height="auto" src="figures/image_features.png" title="test">
+
+<img align="right" width="20%" height="auto" src="figures/image_features.png" alt="Image features">
 
 By concatenating the color values of one pixel and then concatenating the pixels, we can represent the image as a 1D vector. The length of the vector is then equal to the number of pixels in the image multiplied by 3 (RGB).
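
The flattening described above can be sketched in a few lines of NumPy (the 3x3 image here is a random dummy, mirroring the example in the figure):

```python
import numpy as np

# Dummy 3x3 RGB image: each pixel holds three values (R, G, B).
image = np.random.randint(0, 256, size=(3, 3, 3), dtype=np.uint8)

# Flattening concatenates the RGB values of each pixel, pixel by pixel,
# giving a 1D feature vector of length 3 * 3 * 3 = 27.
features = image.flatten()
print(features.shape)  # (27,)
```

For real images the vector grows quickly: a 224x224 RGB image already yields 224 * 224 * 3 = 150,528 features per example.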