+ "##### This notebook represents the code for the article, \"Glenohumeral Joint Force Prediction with Deep Learning\", to predict patient-specific glenohumeral joint (GH) joint force (the reaction force on the glenoid) with a deep learning model (DLM). The DLM has 12 features (patient's parameters) (1) sex, (2) height, (3) weight, (4) glenoid version, (5) glenoid inclination, and physiological cross-sectional area (CSA) of the rotator cuff muscles: (6) supraspinatus (SS), (7) infraspinatus (IS), (8) subscapularis (SC), (9) teres minor (TM), (10) implant (anatomical/reversed), (11) activity, which is one of the three possibilities, abduction in the scapular plane, abduction in the scapula plane with a mass of 2 kg in hand, and abduction in the frontal plane with a mass of 2 kg in hand, and (12) abduction angle. The model is trained on abduction angles of 30, 40, 50, 60, 70, 80, 90, and 100 deg. It can predict GH at other abduction angles, but the evaluations that are reported in the article are based on these values. The target of the model is three components of the GH force in the scapula coordinate system. This notebook consists of two sections. First, the training, hyperparameter optimization, and evaluation of the model are provided. For this section, the \"data_for_model.csv\" is needed. The second section represents a widget for prediction with the model. For this section the \"model.h5\" file, which is the binary file of the optimized DLM is needed. The prediction section can be run independently of the training section."
+ ],
+ "metadata": {
+ "id": "cfbvz1d78nH-"
+ }
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "## Training the deep learning model (DLM)"
],
"metadata": {
"id": "7LFfBWBxJ8LW"
}
},
{
"cell_type": "markdown",
"source": [
"### Libraries"
],
"metadata": {
"id": "CDinoQQaOKbE"
}
},
+ {
+ "cell_type": "markdown",
+ "source": [
+ "The keras-tuner library is needed for hyperparameter optimization and needs to be installed (https://keras.io/keras_tuner/). If you are using google colab the following cell installs it."
+ "The data for training is provided in \"data_for_model.csv\". This data is read and splitted to 85% train and 15% test. The 12 features are saved as X (train and test) and the target as y (train and test)."
- "### A class for the model which is a subclass of the keras tuner hyper model (https://keras.io/keras_tuner/). The hyperparameters and their range for optimization are specified here"
+ "### Hyperparameter optimization\n",
+ "\n",
+ " A class for the model which is a subclass of the keras tuner hyper model (https://keras.io/keras_tuner/). The hyperparameters and their range for optimization are specified here."
],
"metadata": {
"id": "iP0LnUrbKp_X"
}
},
{
"cell_type": "code",
"source": [
"class MyHyperModel(keras_tuner.HyperModel):\n",
" def build(self, hp):\n",
" model = Sequential()\n",
" for i in range(hp.Int(\"num_layers\", 2, 10)):\n",
" # Add dense layers with units and activation based on hyperparameters\n",
" # Tune whether to shuffle the data in each epoch\n",
" shuffle=hp.Boolean(\"shuffle\"),\n",
" **kwargs,\n",
" )\n"
],
"metadata": {
"id": "RJNCWL6IKwHB"
},
- "execution_count": null,
+ "execution_count": 7,
"outputs": []
},
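{
"cell_type": "markdown",
"source": [
"For reference, a self-contained sketch consistent with the fragment above (the unit ranges, activations, and compile settings are illustrative assumptions, not the tuned values):\n",
"\n",
"```python\n",
"import keras_tuner\n",
"from tensorflow.keras.models import Sequential\n",
"from tensorflow.keras.layers import Dense\n",
"\n",
"class MyHyperModel(keras_tuner.HyperModel):\n",
"    def build(self, hp):\n",
"        model = Sequential()\n",
"        for i in range(hp.Int(\"num_layers\", 2, 10)):\n",
"            # Width and activation of each dense layer are tuned\n",
"            model.add(Dense(hp.Int(f\"units_{i}\", 32, 512, step=32),\n",
"                            activation=hp.Choice(f\"act_{i}\", [\"relu\", \"tanh\"])))\n",
"        model.add(Dense(3))  # GHFx, GHFy, GHFz\n",
"        model.compile(optimizer=\"adam\", loss=\"mse\")\n",
"        return model\n",
"\n",
"    def fit(self, hp, model, *args, **kwargs):\n",
"        # Tune whether to shuffle the data in each epoch\n",
"        return model.fit(*args, shuffle=hp.Boolean(\"shuffle\"), **kwargs)\n",
"```"
],
"metadata": {}
},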
{
"cell_type": "markdown",
"source": [
- "### Bayesian optimization for 200 trials to tune the hyperparameters based on the validation loss"
+ "### Bayesian optimization\n",
+ "\n",
+ " Bayesian optimization tool of keras-tuner is used for 100 trials to tune the hyperparameters based on the validation loss. Early stopping is used to stop the training if there is no improvement for 20 epochs and model checkpoint is used to save the best model."
],
"metadata": {
"id": "zwriTvjvLCoS"
}
},
{
"cell_type": "code",
"source": [
"tuner = keras_tuner.BayesianOptimization(\n",
" MyHyperModel(),\n",
" objective=\"val_loss\", # Optimization objective is to minimize validation loss\n",
- " max_trials=200, # Maximum number of trials for the hyperparameter search\n",
+ " max_trials=100, # Maximum number of trials for the hyperparameter search\n",
+ "The \"model.h5\" is loaded for evaluation. As the coeff_determination function which defined earlier, is not a part of keras, is it is needed to be specified as custom_objects in load_model function."
],
"metadata": {
"id": "NrtHpJ-1MBH8"
}
},
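{
"cell_type": "markdown",
"source": [
"The search call with the callbacks described above might look like this (a sketch; the epoch budget and validation split are assumptions):\n",
"\n",
"```python\n",
"from tensorflow.keras.callbacks import EarlyStopping, ModelCheckpoint\n",
"\n",
"callbacks = [\n",
"    # Stop a trial if the validation loss does not improve for 20 epochs\n",
"    EarlyStopping(monitor=\"val_loss\", patience=20),\n",
"    # Save the best model seen so far\n",
"    ModelCheckpoint(\"model.h5\", monitor=\"val_loss\", save_best_only=True),\n",
"]\n",
"tuner.search(X_train, y_train, validation_split=0.15, epochs=500, callbacks=callbacks)\n",
"```"
],
"metadata": {}
},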
{
"cell_type": "code",
"source": [
"# Load the Keras model from the specified file\n",
"model = load_model('model.h5',\n",
"\n",
" # Specify custom objects for loading the model\n",
" custom_objects={\n",
" \"coeff_determination\": coeff_determination # Map the custom metric name to its function\n",
" }\n",
")\n"
],
"metadata": {
"id": "hu39UI6ZL83E"
},
- "execution_count": 5,
+ "execution_count": 8,
"outputs": []
},
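{
"cell_type": "markdown",
"source": [
"The coeff_determination metric is defined earlier in the notebook; a common Keras-backend implementation of R2 looks like this (a sketch, not necessarily the exact definition used here):\n",
"\n",
"```python\n",
"from tensorflow.keras import backend as K\n",
"\n",
"def coeff_determination(y_true, y_pred):\n",
"    # R2 = 1 - SS_res / SS_tot; epsilon guards against division by zero\n",
"    ss_res = K.sum(K.square(y_true - y_pred))\n",
"    ss_tot = K.sum(K.square(y_true - K.mean(y_true)))\n",
"    return 1 - ss_res / (ss_tot + K.epsilon())\n",
"```"
],
"metadata": {}
},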
{
"cell_type": "markdown",
"source": [
- "### Model evaluation on the test data with R2-score and mean_absolute_error of predictions compared to true values"
+ "### Model evaluation\n",
+ " The model is evaluated on the test data with R2-score and mean_absolute_error of predictions compared to true values."
],
"metadata": {
"id": "mEMDLy06MFkZ"
}
},
{
"cell_type": "code",
"source": [
"# Calculate and print the R2 scores for each component of GHF (GHFx, GHFy, GHFz)\n",
- "# You can do new predictions with the following user interface. The default values represents the virtual patient's parameters simulated in Fig. 3 of the article."
+ "## Prediction\n",
+ "The following widget can be used for predictions. The default values represents the virtual patient's parameters simulated in Fig. 3 of the article. Although any values can be selected for the weight, height, glenoid version and inclination, CSA of the rotator cuffs and abduction angle, the model is trained on the range of values of the parameters repersented in figures A2-A9 of the supplementary material of the article. For values of features which are far away from these distributions the predictions might not be reliable. Regarding the abudtion angle, the model is trained on 30, 40, 50, 60, 70, 80, 90, and 100 degrees and the following widget only expects these angles. The model is able to predict at other abduction angles, however, it will be more accurate at these specific ones. This widget serves as a simple interface for prediction. In the case of need for automation the model can be used directly with\n",
+ "\n",
+ "> GHForce = model.predict(X)\n",
+ "\n",
+ "with X being the data for prediction with the shape of (number_of_samples, 12), that 12 shows the number of features mentioned above, and the GHForce is the predicted GH force with the shape of (number_of_samples, 3) with 3 being the x, y, and z componenets of the force."
],
"metadata": {
"id": "PJg76X3UT2Jj"
}
},
{
"cell_type": "code",
"source": [
"import ipywidgets as widgets\n",
"import pandas as pd\n",
"import numpy as np\n",
"import tensorflow as tf\n",
"from IPython.display import clear_output as clear_ipython_output\n",
# Glenohumeral Joint Force Prediction with Deep Learning
This repository contains the data and code of the study "Glenohumeral Joint Force Prediction with Deep Learning". The objective of the study was to predict the glenohumeral joint force (the reaction force on the glenoid) with deep learning.
## Getting Started
-The following instructions will get you a copy of the project locally.
-* Clone the repository locally
-* Open the .ipynb file with jupyter (lab or notebook) or google colab.
-* Put the model.h5 file in the same directory (upload the file in the case of using google colab through left panels Files -> Upload to session storage).
-If you are using google colab you can follow the notebook, otherwise you need to install the libraries.
-https://www.tensorflow.org/install
+The following instructions will get you a copy of the project locally and help you train the model or perform predictions with it.
+* Clone/download the repository
+
+You might use the code with Google Colab or locally with an installed Jupyter (Lab or Notebook). In the case of Google Colab, do as follows:
+* Open Google Colab through https://colab.research.google.com/
+* Upload the MSM_DLM.ipynb from the cloned/downloaded repository directory to Google Colab through Upload -> Choose File.
+* Upload the "data_for_model.csv" and "model.h5" through the left panel: Files -> Upload to session storage. If you want to perform only training, then just upload "data_for_model.csv"; at the end of the training you will have a "model.h5". If you want to perform only predictions, then just upload the "model.h5".
+* Follow the instructions in the notebook to train and evaluate the model or perform predictions with it.
+In the case of using Jupyter locally, do as follows:
+* Install the libraries (an example pip command is given after this list):
+https://www.tensorflow.org/install
https://scikit-learn.org/stable/install.html
+https://keras.io/keras_tuner/
+* Open jupyter in the cloned/downloaded repository directory.
+* Open the MSM_DLM.ipynb notebook file with jupyter.
+* Follow the instructions in the notebook to train and evaluate the model or perform predictions with it.
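+
+For example, the libraries can be installed with pip (versions are not pinned here; see the Prerequisites below for the tested versions):
+```
+pip install tensorflow scikit-learn keras-tuner
+```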
+
## Prerequisites
The model is coded with Python 3.10 and tensorflow 2.12.0.
## Authors
* Pezhman Eghbalishamsabadi (EPFL-LBO).
## Contributors
* Léa Pistorius (EPFL-LBO): performed the first automatic simulation of the virtual subjects and developed a machine learning model to predict the glenohumeral joint force magnitude.