diff --git a/README.md b/README.md
index e7b078ca27fb09d36f18e48f65dd54a47d30c579..0a73262f8664dea738aa19274fa1ccd24c85ecf6 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,37 @@
+# deepSTABp 
+
+ARC for the paper "DeepSTABp: A deep learning approach for the prediction of thermal protein stability". deepSTABp is a protein melting temperature (Tm) predictor and was developed to overcome the limitations of classical experimental approaches, which are expensive, labor-intensive, and have limited proteome and species coverage.
+
+DeepSTABp uses a transformer-based protein language model for sequence embedding and state-of-the-art feature extraction, in combination with other deep learning techniques, for end-to-end protein Tm prediction.
+
+## Usage
+
+deepSTABp can either be used directly at:
+
+An alternative is to clone this ARC and run it locally.
+
+### Setup environment
+
+You can create a conda environment directly from the yml file located in /workflows:
+
+```
+conda env create -f environment.yml
+```
+
+### Running deepSTABp
+
+Afterwards you can use predict_main.py in workflows/TransformerBasedTMPrediction/prediction_model. Simply replace the example FASTA sequence with a path to your .fasta file and run predict_main.py.
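Before pointing predict_main.py at your own file, it can help to verify that the FASTA input is well-formed. A minimal parser can be sketched as follows (illustrative only; `read_fasta` is not part of deepSTABp, and it only assumes the standard `>`-header FASTA convention):

```python
def read_fasta(path):
    """Parse a FASTA file into a dict mapping header -> sequence.

    Minimal sketch: assumes standard '>'-prefixed header lines,
    skips blank lines, and does not validate the amino-acid alphabet.
    """
    records = {}
    header = None
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line:
                continue
            if line.startswith(">"):
                header = line[1:]
                records[header] = []
            elif header is not None:
                records[header].append(line)
    # Join multi-line sequences into single strings
    return {h: "".join(parts) for h, parts in records.items()}
```

If the returned dict is empty or a sequence looks truncated, the input file likely needs fixing before prediction.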
+
+### Training of your own model
+
+In case you want to retrain deepSTABp, simply run the training.py file located in workflows/TransformerBasedTMPrediction/MLP_training. You can also experiment with different architectures by directly editing the model structure found in training.py.
+The other file found in workflows/TransformerBasedTMPrediction/MLP_training is named tuning.py; run it after training, with your already pretrained model, to achieve optimal results.
+
+### Using the datafiles
+
+All datafiles that were used to train deepSTABp can be found in the /runs/TransformerBasedTMPrediction/Datasets folder. The base folder contains the complete dataset; the training, testing, and validation folders contain the sampled datasets derived from the base dataset. The datasets are available in CSV and Parquet format.
+The base folder contains multiple different datasets. The one used for deepSTABp is the human_PCT dataset.
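The CSV variants can be inspected with Python's standard library alone. Below is a sketch of loading records from CSV text; note that the column names `sequence` and `tm` are assumptions for illustration, so check the actual headers in the dataset files first:

```python
import csv
import io

def load_tm_records(csv_text):
    """Read (sequence, melting temperature) pairs from CSV text.

    Sketch under assumed column names 'sequence' and 'tm';
    the real dataset headers may differ.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return [(row["sequence"], float(row["tm"])) for row in reader]

# Synthetic example rows, not real deepSTABp data:
example = "sequence,tm\nMKTAILV,54.2\nGGGAS,61.7\n"
records = load_tm_records(example)
```

For larger files, the Parquet variants are faster to load with a dataframe library such as pandas (via `pandas.read_parquet`), which reads both formats with the same downstream code.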
+
 # ARCs
 
 for details, see <https://github.com/nfdi4plants/ARC-specification>
diff --git a/workflows/TransformerBasedTMPrediction/MLP_training/training_growth_new_architecture.py b/workflows/TransformerBasedTMPrediction/MLP_training/training.py
similarity index 100%
rename from workflows/TransformerBasedTMPrediction/MLP_training/training_growth_new_architecture.py
rename to workflows/TransformerBasedTMPrediction/MLP_training/training.py
diff --git a/workflows/TransformerBasedTMPrediction/MLP_training/training_growth_new_architecture_tuning.py b/workflows/TransformerBasedTMPrediction/MLP_training/tuning.py
similarity index 100%
rename from workflows/TransformerBasedTMPrediction/MLP_training/training_growth_new_architecture_tuning.py
rename to workflows/TransformerBasedTMPrediction/MLP_training/tuning.py
diff --git a/workflows/train_env.yml b/workflows/environment.yml
similarity index 100%
rename from workflows/train_env.yml
rename to workflows/environment.yml