{"id":1143,"date":"2020-03-18T17:16:46","date_gmt":"2020-03-18T16:16:46","guid":{"rendered":"http:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/?p=1143"},"modified":"2022-09-07T11:08:28","modified_gmt":"2022-09-07T09:08:28","slug":"centromere-position-on-a-chromosome-image-using-a-neural-network","status":"publish","type":"post","link":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/2020\/03\/18\/centromere-position-on-a-chromosome-image-using-a-neural-network\/","title":{"rendered":"Centromere Position on a Chromosome Image using a Neural Network"},"content":{"rendered":"\n<p>Chromosomes have one short arm and one long arm. The centromere sits in between and links both arms together. Biologists find it convenient when an application can automatically locate the position of the centromere on a chromosome image. More generally, knowing the centromere position is useful in image processing because it simplifies the classification of the chromosome type.<\/p>\n\n\n\n<p>With this project we want to show how an application can find the centromere positions by using a neural network. In order to train the neural network, we need sufficient training data, and we show here how we created it. A position in an image is a coordinate with two numbers. The application must therefore use a neural network with a regression layer as its output. In this post we show what kind of neural network we used for retrieving a position from an image.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Creating the Training Data<\/h2>\n\n\n\n<p>Previously we used a tool to create around 3000 images from several complete chromosome images. We do not go into much detail about this tool: it loads and shows a complete chromosome image with its 46 chromosomes, and as a user we can select a square on this image with the mouse. 
The content of the square is then saved as a 128&#215;128 chromosome image and as a 128&#215;128 telomere image. Figure 1 shows an example of both images. We have created around 3000 chromosome and telomere images from different positions.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"457\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/chrtel-1024x457.png\" alt=\"\" class=\"wp-image-1153\" srcset=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/chrtel-1024x457.png 1024w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/chrtel-300x134.png 300w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/chrtel-768x342.png 768w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/chrtel.png 1054w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Figure 1: Chromosome Image and Telomere Image<\/figcaption><\/figure>\n\n\n\n<p>Each time we save the chromosome and telomere images, the application updates a csv file with the name of the chromosome (<em>chrname<\/em>) and the name of the telomere (<em>telname<\/em>) using the <em>write<\/em> function of the code below. It uses the pandas library to append rows to the end of a csv file with the name <em>f_name<\/em>. 
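<\/p>\n\n\n\n<p>Before the <em>write<\/em> function itself, here is a minimal, self-contained sketch of the same append pattern (the image file names are made up for illustration): a one-row data frame is concatenated to the existing frame, which is exactly how the csv file grows row by row.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">
```python
import pandas as pd

# Illustrative rows only: one dict per saved image pair,
# with the (still unknown) centromere position x, y.
rows = [
    {'chr': 'chr_0001.png', 'tel': 'tel_0001.png', 'x': 0, 'y': 0},
    {'chr': 'chr_0002.png', 'tel': 'tel_0002.png', 'x': 0, 'y': 0},
]

df = pd.DataFrame(rows[:1])      # frame as read back from the csv file
latest = pd.DataFrame(rows[1:])  # the newly saved image pair
df = pd.concat((df, latest), ignore_index=True, sort=False)

print(df.shape)                  # one row per image pair, four columns
```
<\/pre>\n\n\n\n<p>Reading the csv file back with <em>pd.read_csv<\/em> should then yield the same frame again, one row per saved image pair.<\/p>\n\n\n\n<p>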
<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import pandas as pd\nfrom os.path import isfile\n\n# f_name is a global variable holding the path of the csv file\ndef write(chrname, telname, x, y): \n  \n    if isfile(f_name): \n  \n        df = pd.read_csv(f_name, index_col = 0) \n        data = [{'chr': chrname, 'tel': telname, 'x': x, 'y':y}] \n        latest = pd.DataFrame(data) \n        df = pd.concat((df, latest), ignore_index = True, sort = False) \n    else: \n        data = [{'chr': chrname, 'tel': telname, 'x': x, 'y':y}] \n        df = pd.DataFrame(data) \n\n    df.to_csv(f_name) <\/pre>\n\n\n\n<p>In the code above, you can see that an <em>x<\/em> and a <em>y<\/em> value are stored into the csv file as well. This is the position of the centromere on the chromosome image. At this point, the position is not yet known. We need a tool where we can click on each image to mark the centromere position. The code of the tool is shown below. There are two parts. The first part is the callback function <em>click<\/em>. It is called as soon as the user of the application presses a mouse button or moves the mouse. If the left mouse button is pressed, the current mouse position on the <em>conc<\/em> window is assigned to the variable <em>refPt<\/em>. The second part of the tool loads a chromosome image from a directory <em>chrdestpath<\/em> and a telomere image from a directory <em>teldestpath<\/em> into a window named <em>conc<\/em>. The function <em>makecolor<\/em> (described below) combines both images into one image. The user can select the centromere position with the mouse, and a cross appears at the clicked position (Figure 2). When the user presses the key &#8220;s&#8221;, the application stores the position <em>refPt<\/em> into the pandas data frame <em>df<\/em>. 
After this, the application loads the next chromosome image from the directory <em>chrdestpath<\/em> and the next telomere image from the directory <em>teldestpath<\/em>.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import cv2\nimport numpy as np\nimport pandas as pd\n\nrefPt = (0,0)\nmouseevent = False\n\n# callback: store the clicked position in refPt\ndef click(event,x,y,flags,param):\n    global refPt\n    global mouseevent\n    if event == cv2.EVENT_LBUTTONDOWN:\n        refPt = (x,y)\n        mouseevent = True\n\ncv2.namedWindow('conc')\ncv2.setMouseCallback(\"conc\", click)\n\ntheEnd = False\ntheNext = False\n\nimg_i=0\nimgstart = 0\nassert imgstart &lt; imgcount\n\ndf = pd.read_csv(f_name, index_col = 0) \n\nfor index, row in df.iterrows():\n \n    if img_i &lt; imgstart:\n        img_i = img_i + 1\n        continue\n        \n    chrtest = cv2.imread(\"{}{}\".format(chrdestpath,row[\"chr\"]),1)\n    teltest = cv2.imread(\"{}{}\".format(teldestpath,row[\"tel\"]),1)\n    \n    conc = makecolor(chrtest, teltest)\n    concresized = np.zeros((conc.shape[0]*2, conc.shape[1]*2,3), np.uint8)\n    concresized = cv2.resize(conc, (conc.shape[0]*2,conc.shape[1]*2), interpolation = cv2.INTER_AREA)\n    \n    refPt = (row[\"y\"],row[\"x\"])\n    cv2.putText(concresized,row[\"chr\"], (2,12), cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 255, 0), 1, cv2.LINE_AA)\n\n    while True:\n        cv2.imshow('conc',concresized)\n        key = cv2.waitKey(1)\n        if mouseevent == True:\n            print(refPt[0], refPt[1])\n            concresized = cross(concresized, refPt[0], refPt[1], (255,255,255))\n            mouseevent = False\n        if key &amp; 0xFF == ord(\"q\") :\n            theEnd = True\n            break\n        if key &amp; 0xFF == ord(\"s\") :\n            df.loc[df[\"chr\"] == row[\"chr\"], \"x\"] = refPt[1]\/\/2\n            df.loc[df[\"chr\"] == row[\"chr\"], \"y\"] = refPt[0]\/\/2\n            theNext = True\n            break\n        if key &amp; 0xFF == ord(\"n\") :\n            theNext = True\n            break\n    if theEnd == True:\n        break\n    if theNext == True:\n        theNext = False\n           \ndf.to_csv(f_name) \ncv2.destroyAllWindows()<\/pre>\n\n\n\n<p>Figure 2 shows the cross added to the centromere position selected by the user. This procedure was done around 3000 times on chromosome and telomere images, so the output was a csv file with 3000 chromosome image names, telomere image names, and centromere positions.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/centromere.png\" alt=\"\" class=\"wp-image-1146\" width=\"350\" height=\"320\" srcset=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/centromere.png 731w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/centromere-300x274.png 300w\" sizes=\"auto, (max-width: 350px) 100vw, 350px\" \/><figcaption>Figure 2: Centromere Position on a Chromosome Image<\/figcaption><\/figure>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\">Augmenting the Data<\/h2>\n\n\n\n<p>In general, 3000 images are too few to train a neural network, so we augmented the images to get more training data. This was done by mirroring all chromosome images (and their centromere positions) on the horizontal axis and on the vertical axis. This increased the number of images to 12000. 
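<\/p>\n\n\n\n<p>As a quick sanity check of this augmentation: mirroring an image maps a coordinate <em>c<\/em> on the flipped axis to <em>imgsize - c<\/em>, which is the convention used by the <em>mirrowdata<\/em> function below. A small sketch, assuming the 128-pixel image size used in this post (the helper name <em>mirror_point<\/em> is ours, for illustration):<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">
```python
# Sanity check for mirrored centromere coordinates, assuming the
# 128-pixel image size used in this post. On the flipped axis a
# coordinate c is mapped to imgsize - c, as in mirrowdata.
imgsize = 128

def mirror_point(point, flip):
    x, y = point
    if flip == 0:                      # flip the first coordinate
        return (imgsize - x, y)
    if flip == 1:                      # flip the second coordinate
        return (x, imgsize - y)
    return (imgsize - x, imgsize - y)  # flip == -1: flip both

print(mirror_point((30, 50), 0))       # gives (98, 50)
```
<\/pre>\n\n\n\n<p>Applying the same flip twice returns the original point, which is a convenient check for the augmented target coordinates.<\/p>\n\n\n\n<p>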
The code below shows the <em>load_data<\/em> function that loads the training data or the validation data into arrays.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import cv2\nimport pandas as pd\nfrom os.path import isfile\n\ndef load_data(csvname, chrdatapathname, teldatapathname):\n    X_train = []\n    y_train = []\n    \n    assert isfile(csvname) == True\n    df = pd.read_csv(csvname, index_col = 0) \n    for index, row in df.iterrows():\n                          \n        chrname = \"{}{}\".format(chrdatapathname,row[\"chr\"])\n        telname = \"{}{}\".format(teldatapathname,row[\"tel\"])\n    \n        chrimg = cv2.imread(chrname,1)\n        telimg = cv2.imread(telname,1)                  \n                 \n        X_train.append(makecolor(chrimg, telimg))\n        y_train.append((row['x'],row['y']))\n    return X_train, y_train<\/pre>\n\n\n\n<p>In the code above you find a <em>makecolor<\/em> function. <em>makecolor<\/em> copies the grayscale image of the chromosome into the green layer of a new color image and the telomere image into the red layer of the same color image; see the code of the function <em>makecolor<\/em> below.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import cv2\nimport numpy as np\n\nimgsize = 128  # width and height of the quadratic images\n\ndef makecolor(chromo, telo):\n\n    chromogray = cv2.cvtColor(chromo, cv2.COLOR_BGR2GRAY)\n    telogray = cv2.cvtColor(telo, cv2.COLOR_BGR2GRAY)\n    \n    imgret = np.zeros((imgsize, imgsize,3), np.uint8)\n    \n    imgret[0:imgsize, 0:imgsize,1] = chromogray\n    imgret[0:imgsize, 0:imgsize,0] = telogray\n    \n    return imgret<\/pre>\n\n\n\n<p>Below is the code of the function <em>mirrowdata<\/em>, which flips the images horizontally or vertically. 
It uses the parameter <em>flip<\/em> to control how the image and its centromere position are flipped.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">def mirrowdata(data, target, flip=0): \n    xdata = []\n    ytarget = []\n\n    for picture in data:\n        xdata.append(cv2.flip(picture, flip))\n        \n    for point in target:\n        if flip == 0:\n            ytarget.append((imgsize-point[0],point[1]))\n        if flip == 1:\n            ytarget.append((point[0],imgsize-point[1]))\n        if flip == -1:\n            ytarget.append((imgsize-point[0],imgsize-point[1]))\n    \n    return  xdata, ytarget<\/pre>\n\n\n\n<p>The following code loads the training data into the array <em>train_data<\/em> and the array <em>train_target<\/em>. <em>train_data<\/em> contains color images of the chromosomes and telomeres, and <em>train_target<\/em> contains the centromere positions. The <em>mirrowdata<\/em> function is applied twice on the data with different flip parameter settings. After this, the data is converted to numpy arrays. This needs to be done to be able to normalize the images with the mean and the standard deviation. This is done for 10000 of the 12000 images, which form the training data. 
The same is done with the remaining 2000 images for the validation data.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">train_data, train_target = load_data(csvtrainname, chrtrainpath, teltrainpath)\ntrain_mirrow_data, train_mirrow_target = mirrowdata(train_data, train_target, 0)\ntrain_data = train_data + train_mirrow_data\ntrain_target = train_target + train_mirrow_target\ntrain_mirrow_data, train_mirrow_target = mirrowdata(train_data, train_target, 1)\ntrain_data = train_data + train_mirrow_data\ntrain_target = train_target + train_mirrow_target\n\ntrain_data = np.array(train_data, dtype=np.float32)\ntrain_target = np.array(train_target, dtype=np.float32)\n\ntrain_data -= train_data.mean()\ntrain_data \/= train_data.std()<\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Modeling and Training the Neural Network<\/h2>\n\n\n\n<p>Since we want to feed images into the neural network, we decided to use a network with convolutional layers. We started with a layer having 32 filters. As input for the training data we need images of size <em>imgsize<\/em>, which is in our case 128. After each convolutional layer we added a max pooling layer with <em>pool_size=(2,2)<\/em>, which halves the size of the input data. The output is fed into the next layer. Altogether we have four convolutional layers, and we increase the number of filters after each layer. After the fourth layer we flatten the network and feed the result into the first dense layer. Then we feed the output into the second dense layer, which has only two neurons. Its activation function is a linear function. This means we will receive two float values, which are supposed to be the position of the centromere. 
As a loss function we decided to use the <em>mean_absolute_percentage_error<\/em>.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from keras.models import Sequential\nfrom keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout\nfrom keras.regularizers import l2\nfrom keras.optimizers import Adam\n\nmodel = Sequential()\nmodel.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01), padding='same', input_shape=(imgsize, imgsize, 3)))\nmodel.add(MaxPooling2D((2, 2)))\nmodel.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01), padding='same'))\nmodel.add(MaxPooling2D((2, 2)))\nmodel.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01), padding='same'))\nmodel.add(MaxPooling2D((2, 2)))\nmodel.add(Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform', kernel_regularizer=l2(0.01), bias_regularizer=l2(0.01), padding='same'))\nmodel.add(MaxPooling2D((2, 2)))\nmodel.add(Flatten())\nmodel.add(Dense(128, activation='relu', kernel_initializer='he_uniform'))\nmodel.add(Dropout(0.1))\nmodel.add(Dense(2, activation='linear'))\nopt = Adam(lr=1e-3, decay=1e-3 \/ 200)\nmodel.compile(loss=\"mean_absolute_percentage_error\", optimizer=opt, metrics=['accuracy'])<\/pre>\n\n\n\n<p>We start the training with the <em>fit<\/em> method. The input parameters are the list of colored and normalized chromosome images (<em>train_data<\/em>), the list of centromere positions (<em>train_target<\/em>), and the validation data (<em>valid_data, valid_target<\/em>). Callback functions were defined to stop the training as soon as no progress is seen, and checkpoints are saved automatically 
if there is progress during the training.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint\n\ncallbacks = [\n    EarlyStopping(patience=10, verbose=1),\n    ReduceLROnPlateau(factor=0.1, patience=3, min_lr=0.00001, verbose=1),\n    ModelCheckpoint('modelregr.h5', verbose=1, save_best_only=True, save_weights_only=True)\n]\n\nmodel.fit(train_data, train_target, batch_size=20, epochs=50, callbacks=callbacks, verbose=1, validation_data=(valid_data, valid_target) )<\/pre>\n\n\n\n<p>The training took around five minutes on an NVIDIA 2070 graphics card. The accuracy is 0.9462 and the validation accuracy is 0.9258, which indicates slight overfitting. The loss values show the same slight overfitting.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Testing<\/h2>\n\n\n\n<p>We kept a few chromosome and telomere images aside for testing and predicting. The images were stored in a <em>test_data<\/em> array and normalized before prediction. The prediction was done with the following code.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">predictions_test = model.predict(test_data, batch_size=50, verbose=1)<\/pre>\n\n\n\n<p><em>predictions_test<\/em> now contains all predicted centromere positions. Figure 3 shows the positions added to the chromosome images. We can see that the position of the cross is pretty close to the centromere. However, there are deviations. 
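<\/p>\n\n\n\n<p>To quantify such deviations, one can compute the mean Euclidean distance between the predicted and the manually marked positions. A small sketch with made-up example values (the name <em>test_target<\/em> for the marked positions and the numbers themselves are assumptions for illustration; a real evaluation would use the arrays from above):<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">
```python
import numpy as np

# Illustrative values only: two predicted and two marked positions.
predictions_test = np.array([[62.0, 70.0], [58.0, 64.0]])
test_target = np.array([[60.0, 68.0], [61.0, 64.0]])

# Euclidean distance per image, then the mean over the test images.
errors = np.linalg.norm(predictions_test - test_target, axis=1)
mean_error = errors.mean()
print(mean_error)
```
<\/pre>\n\n\n\n<p>On the real test set this gives a single pixel-distance number that is easier to compare between models than the percentage loss.<\/p>\n\n\n\n<p>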
<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"515\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/centrpospredict-1024x515.png\" alt=\"\" class=\"wp-image-1280\" srcset=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/centrpospredict-1024x515.png 1024w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/centrpospredict-300x151.png 300w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/centrpospredict-768x386.png 768w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/centrpospredict-1536x773.png 1536w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/centrpospredict.png 1864w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><figcaption>Figure 3: Predicted centromere positions<\/figcaption><\/figure>\n\n\n\n<p>For displaying the chromosomes as shown in Figure 3 we use the following <em>showpics<\/em> function. 
Note: if you want to use this code, be aware that the input images must not be normalized; otherwise you will see a black image.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import numpy as np\nfrom matplotlib.pyplot import figure, imshow\n\ndef showpics(data, target, firstpics=0, lastpics=8):\n    chrtail=[]\n    pnttail=[]\n    columns = 4\n    print(data[0].shape)\n    for i in range(firstpics, lastpics):\n        chrtail.append(data[i])\n        pnttail.append(target[i])\n    rows = (lastpics-firstpics)\/\/columns\n    fig=figure(figsize=(16, 4*rows))\n    for i in range(columns*rows):\n        point = pnttail[i]\n        fig.add_subplot(rows, columns, i+1)\n        pic = np.zeros((chrtail[i].shape[0], chrtail[i].shape[1],3), np.uint8)\n        pic[0:pic.shape[0], 0:pic.shape[1], 0] = chrtail[i][0:pic.shape[0], 0:pic.shape[1], 0]\n        pic[0:pic.shape[0], 0:pic.shape[1], 1] = chrtail[i][0:pic.shape[0], 0:pic.shape[1], 1]\n        pic[0:pic.shape[0], 0:pic.shape[1], 2] = chrtail[i][0:pic.shape[0], 0:pic.shape[1], 2]\n        imshow(cross(pic, int(point[1]), int(point[0]), (255,255,255)))    <\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>The intention of this project was to show how to use a linear regression output as the last layer of a neural network. We wanted to get a position (coordinates) from an image. <\/p>\n\n\n\n<p>Firstly we marked about 3000 centromere positions on chromosome and telomere images with a tool we created. Then we augmented the data to 12000 images by flipping them horizontally and vertically.<\/p>\n\n\n\n<p>Secondly we trained a multilayer convolutional neural network with four convolutional layers and two dense layers. The last dense layer has two neurons. 
One for each coordinate.<\/p>\n\n\n\n<p>The prediction result was fairly good, considering the little effort we spent on optimizing the model. In Figure 3 we can still see that the centromere position is not always hit at the right spot. We expect improvement after adding more data and optimizing the model further.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Chromosomes have one short arm and one long arm. The centromere sits in between and links both arms together. Biologists find it convenient that an application can spot automatically the position of the centromere on a chromosome image. In general for image processing, it is useful for an application to know the centromere position to &hellip; <a href=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/2020\/03\/18\/centromere-position-on-a-chromosome-image-using-a-neural-network\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Centromere Position on a Chromosome Image using a Neural Network<\/span><\/a><\/p>\n","protected":false},"author":24,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[4,3,5,7,14],"class_list":["post-1143","post","type-post","status-publish","format-standard","hentry","category-allgemein","tag-ai","tag-deep-learning","tag-ki","tag-neural-network","tag-unet"],"_links":{"self":[{"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/posts\/1143","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/users\/24"}],"replies":[{"embeddable":true,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-j
son\/wp\/v2\/comments?post=1143"}],"version-history":[{"count":232,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/posts\/1143\/revisions"}],"predecessor-version":[{"id":4844,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/posts\/1143\/revisions\/4844"}],"wp:attachment":[{"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/media?parent=1143"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/categories?post=1143"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/tags?post=1143"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}