diff --git a/README.md b/README.md
index b0c6f8ff3da20948959b88595cae61f82ad3c2e1..0741f1403d20baff469f3895407ed12c32f042a9 100644
--- a/README.md
+++ b/README.md
@@ -21,51 +21,51 @@ For more details please read our project report.
 
 # Project structure
 
-    |_ cli.py
-    |	 The command line interface for our project. Imports and runs the files in scripts/. See Usage for more info.
-    |
-    |_ data/
-    |    All necessary inputs and the generated outputs.
-    |	 |
-    |	 |_ cora
-    |    |    |_embeddings - the output of EP on Cora
-    |    |    |_graph - the input for EP - a pickled networkX graph
-    |    |    |_models - the saved TF models; can be used to save the embeddings after 40 epochs, for instance
-    |    |    |_raw - the raw Cora data, input for the Cora preprocessing and the node classification
-    |    |    |_summaries - the TF loss summaries for the two Cora label types; used to produce the figures in the report
-    |    |
-    |    |_ other - files that don't belong to a particular dataset
-    |    |
-    |    |_ senseval2/senseval3
-    |    |    |_processed - the processed raw S2 or S3 data; input for the WSD
-    |    |    |_raw - the raw S2 or S3 data
-    |    |    |_wsd-answers - outputs of the WSD on the S2 or S3 data
-    |    |
-    |	 |_ wordnet
-    |         |_embeddings - the output of EP on WordNet
-    |         |_graph - the input for EP - a pickled networkX graph
-    |         |_models - the saved TF models; can be used to save the embeddings after 50 epochs, for instance
-    |         |_raw - the raw WN data, input for the preprocessing scripts
-    |         |_summaries - the TF loss summaries for the five WN label types; used to produce the figures in the report
-    |         |_mappings - various mappings for the synset IDs, lemmata, WN3.0->WN1.7 etc.; used in the WSD
-    |
-    |_ __init__.py
-    |
-    |_ README.md
-    |    This file.
-    |
-    |_ requirements.txt
-    |      An image of the virtualenv. -> pip install -r requirements.txt
-    |
-    |_ scripts/
-    |    Python scripts for EP, preprocessing, node classification and WSD. 
-    |    |
-    |    |_ embedding_propagation - the EP algorithm
-    |    |_ node_classification - the NC experiment on the Cora dataset
-    |    |_ preprocessing - preprocessing scripts for Cora, SensEval, and WordNet. Not part of the CLI, so partly with aux. files.
-    |    |_ scoring - the official S2 and S3 All Words Task scorer for the WSD
-    |    |_ wsd - the two WSD methods
-    |
+	|_ cli.py
+	|    The command line interface for our project. Imports and runs the files in scripts/. See Usage for more info.
+	|
+	|_ data/
+	|    All necessary inputs and the generated outputs.
+	|	|
+	|	|_ cora/
+	|	|    |_embeddings/ - the output of EP on Cora
+	|	|    |_graph/ - the input for EP: a pickled NetworkX graph
+	|	|    |_models/ - the saved TF models; can be used, for instance, to export the embeddings after 40 epochs
+	|	|    |_raw/ - the raw Cora data, input for the Cora preprocessing and the node classification
+	|	|    |_summaries/ - the TF loss summaries for the two Cora label types; used to produce the figures in the report
+	|	|
+	|	|_ other/ - files that don't belong to a particular dataset
+	|	|
+	|	|_ senseval2/, senseval3/
+	|	|    |_processed/ - the processed raw S2 or S3 data; input for the WSD
+	|	|    |_raw/ - the raw S2 or S3 data
+	|	|    |_wsd-answers/ - outputs of the WSD on the S2 or S3 data
+	|	|
+	|	|_ wordnet/
+	|         |_embeddings/ - the output of EP on WordNet
+	|         |_graph/ - the input for EP: a pickled NetworkX graph
+	|         |_models/ - the saved TF models; can be used, for instance, to export the embeddings after 50 epochs
+	|         |_raw/ - the raw WN data, input for the preprocessing scripts
+	|         |_summaries/ - the TF loss summaries for the five WN label types; used to produce the figures in the report
+	|         |_mappings/ - various mappings for the synset IDs, lemmata, WN3.0->WN1.7, etc.; used in the WSD
+	|
+	|_ __init__.py
+	|
+	|_ README.md
+	|    This file.
+	|
+	|_ requirements.txt
+	|    A snapshot of the virtualenv; install it with pip install -r requirements.txt
+	|
+	|_ scripts/
+	|    Python scripts for EP, preprocessing, node classification, and WSD.
+	|	|
+	|	|_ embedding_propagation/ - the EP algorithm
+	|	|_ node_classification/ - the NC experiment on the Cora dataset
+	|	|_ preprocessing/ - preprocessing scripts for Cora, SensEval, and WordNet. Not part of the CLI, so some scripts come with auxiliary files.
+	|	|_ scoring/ - the official S2 and S3 All Words Task scorer for the WSD
+	|	|_ wsd/ - the two WSD methods
+	|
     
 
 # Usage