{"id":1725,"date":"2020-04-02T14:55:46","date_gmt":"2020-04-02T12:55:46","guid":{"rendered":"http:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/?p=1725"},"modified":"2022-09-07T11:06:07","modified_gmt":"2022-09-07T09:06:07","slug":"license-plate-recognition-using-neural-network","status":"publish","type":"post","link":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/2020\/04\/02\/license-plate-recognition-using-neural-network\/","title":{"rendered":"License Plate Recognition using Neural Network"},"content":{"rendered":"\n<p>At the school where I work, I teach a class called Design Cyber Physical Systems. The name leaves the content of the class open to many interpretations, and I leave it open intentionally. In previous semesters, I let the students choose a topic that had to involve sensors, actuators, and micro-controllers. The students brainstorm a specific implementation idea, then create a plan and execute it during the semester. This semester I changed the topic: this time the implementation had to include a neural network and a camera system. <\/p>\n\n\n\n<p>Three students who participated in this class decided to create a system that takes images of an access road leading to a gated parking lot. As soon as a car approaches, the camera takes an image of the car with its license plate. The system has to determine the position of the license plate in the image and extract it. Algorithms recognize the characters and numbers and compare them with the license plates stored in a database. If the license plate matches one of the license plates in the database, the gate is opened. 
Due to organizational problems with accessing a parking lot gate, we decided to use a simple signal light instead.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Data Preprocessing<\/h2>\n\n\n\n<p>The system&#8217;s application first needs to find the license plate in the car&#8217;s image. This can be done in various ways, but we chose to use a neural network. We need many training images, i.e. images of cars from the front, and we need to mark the license plate on each image to obtain the mask image needed for the neural network training. <\/p>\n\n\n\n<p>There are programs available for downloading images in bulk, e.g. tools driving a browser via chromedriver. The program we used can be found <a href=\"https:\/\/github.com\/hardikvasa\/google-images-download\">here<\/a>. You can control the search criteria with its options, and the program automatically downloads numerous images. The search term in our case was simply &#8220;license plate car&#8221;. Not every downloaded image served our purpose: we limited ourselves to German license plates, so we had to filter the useful images manually. Altogether we gathered around 760 training and test images. <\/p>\n\n\n\n<p>The next step was to label the license plate areas in the downloaded images. We found a tool called <a href=\"https:\/\/github.com\/wkentaro\/labelme\">labelme<\/a>, which we had to install. Note that we only managed to install an older version of labelme on Ubuntu 18.04 (command: sudo pip3 install labelme==3.3.3). In Figure 1 you can see the tool labelme with a loaded image. You click around the license plate with the mouse to create a polygon, and the points of the polygon can be saved into a json file. We did this for all 760 images, which produced 760 json files. 
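Each of these json files stores the drawn polygon under its shapes key. A minimal sketch of writing and reading back such an annotation (with hypothetical coordinates and file name) looks like this:

```python
import json

# A minimal labelme-style annotation with hypothetical polygon coordinates:
annotation = {'shapes': [{'label': 'plate',
                          'points': [[100, 200], [300, 200], [300, 260], [100, 260]]}]}
with open('img_example.json', 'w') as f:
    json.dump(annotation, f)

# Reading the polygon back, just as the mask-creation code below does:
with open('img_example.json') as f:
    data = json.load(f)
points = data['shapes'][0]['points']
```

The points list holds the polygon corners around the license plate, one [x, y] pair per mouse click.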
<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Bildschirmfoto-von-2020-03-28-14-21-43.png\" alt=\"\" class=\"wp-image-1727\" width=\"480\" height=\"424\" srcset=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Bildschirmfoto-von-2020-03-28-14-21-43.png 600w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Bildschirmfoto-von-2020-03-28-14-21-43-300x265.png 300w\" sizes=\"auto, (max-width: 480px) 100vw, 480px\" \/><figcaption>Figure 1: Labelme<\/figcaption><\/figure>\n<\/div>\n\n\n<p>The next step was to create mask images from the json files. Each json file contains the polygon points, which are parsed by the function <em>create_mask_from_image<\/em> below. It opens the json file, retrieves the points, and uses <em>skimage.draw<\/em>&#8216;s<em> polygon<\/em> method to create the mask image. Finally, it stores the image in the <em>mask_path<\/em> directory. 
<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import os\nimport json\n\nimport numpy as np\nfrom skimage.draw import polygon\nfrom skimage.io import imsave\n\ndef create_mask_from_image(path, json_file, img_width, img_height, file_number, output_dir):\n    json_path = os.path.join(path, json_file)\n\n    mask_name = \"img_\" + str(file_number) + \".png\"\n    mask_path = os.path.join(output_dir, \"masks\", mask_name)\n    \n    with open(json_path) as f:\n        data = json.load(f)\n\n    vertices = np.array([[point[1],point[0]] for point in data['shapes'][0]['points']])\n    vertices = vertices.astype(int)\n\n    img = np.zeros((img_height, img_width), 'uint8')\n\n    rr, cc = polygon(vertices[:,0], vertices[:,1], img.shape)\n    img[rr,cc] = 1\n\n    imsave(mask_path, img)<\/pre>\n\n\n\n<p>This function was applied to all available json files, so at this point we had all training images and mask images stored in the paths <em>dataset\/images\/ <\/em>and <em>dataset\/masks\/<\/em>. Note that the names of the training images and the corresponding mask images need to be identical.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Training of the Neural Network<\/h2>\n\n\n\n<p>What we are facing is a segmentation problem: after we feed the application an image of a car, we want to receive an image indicating where the license plate is. A U-Net can be used to solve such a problem; this has already been discussed in several earlier posts on this blog. This time, however, we used the library <em>keras_segmentation<\/em>. The model is returned by the <em>segnet<\/em> method. 
We chose images with a height of 350 and a width of 525 pixels.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">from keras_segmentation.models.segnet import segnet\n\nmodel = segnet(n_classes=2, input_height=350, input_width=525)\n\npath=\"\/home\/...\/dataset\/\"\n\nmodel.train(\n    train_images =  path+\"images\/\",\n    train_annotations = path+\"masks\/\",\n    checkpoints_path = \"\/tmp\/segnet\", epochs=3\n)\n\nmodel.save(\"weights.h5\")<\/pre>\n\n\n\n<p>The train method executes the training. On an NVIDIA 2070 graphics card it took about three minutes for three epochs, reaching an accuracy of 99.4%. After training we saved the weights to a file called <em>weights.h5<\/em>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Testing of the Neural Network<\/h2>\n\n\n\n<p>We put a few images aside to test the trained model. The code below loads the weights with the method <em>load_weights<\/em>. A test image is read and shown with matplotlib&#8217;s <em>imshow<\/em>.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import matplotlib.pyplot as plt\nimport matplotlib.image as mpimg\n\nmodel.load_weights(\"weights.h5\")\nimg=mpimg.imread(\"\/home\/...\/img_1.jpg\")\nimgplot = plt.imshow(img)\nplt.show()<\/pre>\n\n\n\n<p>Figure 2 shows the test image from matplotlib. 
The license plate can be clearly seen at the lower\/middle part of the image.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"324\" height=\"216\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Auto.png\" alt=\"\" class=\"wp-image-1756\" srcset=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Auto.png 324w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Auto-300x200.png 300w\" sizes=\"auto, (max-width: 324px) 100vw, 324px\" \/><figcaption>Figure 2: Test Image<\/figcaption><\/figure>\n<\/div>\n\n\n<p>To predict the license plate area on the image, we need to feed the test image into the trained model. This can be done with the <em>predict_segmentation<\/em> method. This method writes the predicted image to out.png. <\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">test_img = \"\/home\/...\/img_1.jpg\"\n\nout = model.predict_segmentation(\n    inp=test_img,\n    out_fname=\"dataset\/tests\/out.png\",\n)\n\nplt.imshow(out)<\/pre>\n\n\n\n<p>The code above calls the matplotlib method <em>imshow<\/em> and in Figure 3 you can see the predicted mask image derived from the test image. 
<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"328\" height=\"217\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Segment.png\" alt=\"\" class=\"wp-image-1759\" srcset=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Segment.png 328w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Segment-300x198.png 300w\" sizes=\"auto, (max-width: 328px) 100vw, 328px\" \/><figcaption>Figure 3: Predicted Mask<\/figcaption><\/figure>\n<\/div>\n\n\n<p>From the figures alone you cannot see how well Figure 2 and Figure 3 overlap, so we wrote code to create an added image from the test image and the predicted mask image, see the code below. Note that each pixel of the predicted mask image has only two values, zero and one. In order to add Figure 2 and Figure 3, we need to multiply the predicted mask image by 255. The OpenCV method <em>addWeighted<\/em> then adds both images into a new image.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">import cv2\nimport numpy as np\n\norig_img = cv2.imread(test_img)\nout = out.astype('float32')\nout = cv2.resize(out, (orig_img.shape[1], orig_img.shape[0]))\nnew_out = np.zeros((orig_img.shape[0], orig_img.shape[1], 3), dtype=\"uint8\")\nnew_out[:,:,0] = out[:,:] * 255\norig_img = cv2.cvtColor(orig_img, cv2.COLOR_BGR2RGB)\nplt.imshow(cv2.addWeighted(orig_img, 0.5, new_out, 0.5, 0.0))<\/pre>\n\n\n\n<p>Matplotlib&#8217;s method <em>imshow<\/em> shows the added image, see Figure 4. You can see that both images align very well. 
The license plate is highlighted in red.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"324\" height=\"216\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Added-1.png\" alt=\"\" class=\"wp-image-1728\" srcset=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Added-1.png 324w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Added-1-300x200.png 300w\" sizes=\"auto, (max-width: 324px) 100vw, 324px\" \/><figcaption>Figure 4: Added Images<\/figcaption><\/figure>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\">Position Detection of the License Plate<\/h2>\n\n\n\n<p>The next step is to find the position of the mask in order to obtain a bounding box around it. We can use the OpenCV method <em>findContours<\/em> to obtain the contour of the mask. The code below shows how we call <em>findContours.<\/em><\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">contours,_ = cv2.findContours(np.array(out, \"uint8\"), cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)\nplt.imshow(cv2.drawContours(orig_img, [contours[0]], -1, (255,0,0), 2))<\/pre>\n\n\n\n<p>Figure 5 shows the output image created by the OpenCV method <em>drawContours<\/em>. 
<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"325\" height=\"217\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Contour.png\" alt=\"\" class=\"wp-image-1729\" srcset=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Contour.png 325w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Contour-300x200.png 300w\" sizes=\"auto, (max-width: 325px) 100vw, 325px\" \/><figcaption>Figure 5: Mask&#8217;s Contour<\/figcaption><\/figure>\n<\/div>\n\n\n<p>The code below creates a bounding box from the contour around the license plate, which is assumed to be the first element of the output list <em>contours<\/em>. The OpenCV method <em>minAreaRect<\/em> finds the rectangle with the minimum area around the contour, and <em>boxPoints<\/em> converts it into the corner points <em>rect_corners<\/em>. OpenCV&#8217;s <em>drawContours<\/em> draws the bounding box, which matplotlib then displays.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">rect = cv2.minAreaRect(contours[0])\nrect_corners = cv2.boxPoints(rect)\nrect_corners = np.int0(rect_corners)\n\norig_img = mpimg.imread(test_img)\ncontour_img = cv2.drawContours(orig_img, [rect_corners], 0, (0,255,0),  2)\nplt.imshow(contour_img)<\/pre>\n\n\n\n<p>In Figure 6 you can see how matplotlib draws the bounding box around the license plate. The edges of the bounding box are not in general parallel to the edges of the test image; it is quite possible that the bounding rectangle is warped. 
This is something you cannot see in Figure 6.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"322\" height=\"217\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Bounding.png\" alt=\"\" class=\"wp-image-1730\" srcset=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Bounding.png 322w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Bounding-300x202.png 300w\" sizes=\"auto, (max-width: 322px) 100vw, 322px\" \/><figcaption>Figure 6: Box bounding the License Plate<\/figcaption><\/figure>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\">Warping the License Plate Image<\/h2>\n\n\n\n<p>Before recognizing the letters of the license plate image, we should transform the bounding box to a true rectangular shape. The function <em>order_points_clockwise<\/em> in the code below sorts the points of <em>rect_corners<\/em> clockwise, with the first point at the upper left corner. It returns the rearranged list as <em>rect_corners_clockwise<\/em>. The function <em>warp_img<\/em> extracts the license plate region from the original test image and transforms it into a true rectangle using the transformation methods <em>getPerspectiveTransform<\/em> and <em>warpPerspective<\/em>. The method <em>warpPerspective<\/em> receives the width and height of the extracted license plate from the function <em>get_polygon_dimensions<\/em>. Note again that the extracted license plate region is not a true rectangle, but rather a rhombus. The function <em>get_polygon_dimensions<\/em> uses the Pythagorean theorem to approximate the width and height of the rhombus. 
OpenCV&#8217;s method <em>getPerspectiveTransform<\/em> calculates the transformation matrix and OpenCV&#8217;s method <em>warpPerspective<\/em> transforms the license plate image so that it has a true rectangular shape.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">def get_polygon_dimensions(points):\n    from math import sqrt\n    (tl, tr, br, bl) = points\n    widthA = sqrt(((br[0] - bl[0]) ** 2) + ((br[1] - bl[1]) ** 2))\n    widthB = sqrt(((tr[0] - tl[0]) ** 2) + ((tr[1] - tl[1]) ** 2))\n    heightA = sqrt(((tr[0] - br[0]) ** 2) + ((tr[1] - br[1]) ** 2))\n    heightB = sqrt(((tl[0] - bl[0]) ** 2) + ((tl[1] - bl[1]) ** 2))\n\n    width = max(int(widthA), int(widthB))\n    height = max(int(heightA), int(heightB))\n\n    return (width, height)\n\ndef warp_img(img, points):\n    width, height = get_polygon_dimensions(points)\n    dst = np.array([\n        [0, 0],\n        [width - 1, 0],\n        [width - 1, height - 1],\n        [0, height - 1]], dtype = \"float32\")\n    \n    M = cv2.getPerspectiveTransform(points, dst)\n    warped_img = cv2.warpPerspective(img, M, (width, height))\n\n    return warped_img\n\ndef order_points_clockwise(pts):\n    rect = np.zeros((4, 2), dtype=\"float32\")\n  \n    s = pts.sum(axis=1)\n    rect[0] = pts[np.argmin(s)]\n    rect[2] = pts[np.argmax(s)]\n    \n    diff = np.diff(pts, axis=1)\n    rect[1] = pts[np.argmin(diff)]\n    rect[3] = pts[np.argmax(diff)]\n \n    return rect\n\nrect_corners_clockwise = order_points_clockwise(rect_corners)\norig_img = mpimg.imread(test_img)\nwarped_img = warp_img(orig_img, np.array(rect_corners_clockwise, \"float32\"))\nplt.imshow(warped_img)\n\ngray_img = cv2.cvtColor(warped_img, cv2.COLOR_RGB2GRAY)\n_,prediction_img = cv2.threshold(gray_img, 50, 255, 
cv2.THRESH_BINARY)\nplt.imshow(prediction_img)<\/pre>\n\n\n\n<p>In the upper part of Figure 7 you can see the license plate piece extracted from the original test image. In the lower part of Figure 7 you can see the warped license plate image, converted to grayscale and combined with a threshold filter. Note that the warping in Figure 7 does not show much of a difference. However, the rectangle edges do not necessarily need to be parallel to the test image edges, so the transformation is really needed in some cases.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"194\" height=\"68\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/Licenses.png\" alt=\"\" class=\"wp-image-1731\" \/><figcaption>Figure 7: License Plate<\/figcaption><\/figure>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\">Character Recognition<\/h2>\n\n\n\n<p>Figure 8 shows the set of reference characters for German license plates, which are available as one image per character. 
In the code below the figure, we assign the directory containing the reference character images to the variable <em>feletterspath<\/em>.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-medium is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/letters-300x203.png\" alt=\"\" class=\"wp-image-1737\" width=\"258\" height=\"175\" srcset=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/letters-300x203.png 300w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/letters-1024x692.png 1024w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/letters-768x519.png 768w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/letters-1536x1039.png 1536w, https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/letters.png 1763w\" sizes=\"auto, (max-width: 258px) 100vw, 258px\" \/><figcaption>Figure 8: Set of Characters for Reference<\/figcaption><\/figure>\n<\/div>\n\n\n<p>The main function in the code below is <em>get_prediction<\/em>, which is called with a warped and grayscaled license plate image as its input parameter.<\/p>\n\n\n\n<p>First, the function <em>get_prediction<\/em> finds the contours of the license plate image. The contours are forwarded to the <em>_get_rectangles_around_letters<\/em> function, which checks each contour&#8217;s height and width with the function <em>_check_dimensions<\/em>. It simply checks whether a contour has a height similar to the license plate image height and a width similar to one eighth of the license plate image width. If this is the case, there is a high probability that the contour is a character. The function <em>_get_rectangles_around_letters<\/em> sorts the contours from left to right using the sort function and moves them into the list <em>rectangles<\/em>. 
The contours in this list are therefore very likely characters.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">feletterspath=\"\/home\/...\/feletters\/\"\n\ndef get_prediction(img):\n \n    img_dimensions = (660, 136)\n    img = cv2.resize(img, (img_dimensions))\n\n    contours,_ = cv2.findContours(img, cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)\n    \n    rectangles = _get_rectangles_around_letters(contours, img_dimensions)\n    \n    if len(rectangles) &lt; 3:\n        return\n    \n    letter_imgs = _get_letter_imgs(img, rectangles)\n    letters = _get_letter_predictions(letter_imgs)\n    letters = _add_space_characters(letters, rectangles)\n    return letters\n\n\ndef _check_dimensions(img_dimensions, rectangle):\n\n    img_width, img_height = img_dimensions\n    (x,y,w,h) = rectangle\n    letter_min_width, letter_max_width = img_width \/ 17, img_width \/ 8\n    letter_min_height, letter_max_height = img_height \/ 2, img_height \n    rectangle_within_dimensions = (w &gt; letter_min_width and w &lt; letter_max_width) \\\n        and (h &gt; letter_min_height and h &lt; letter_max_height)\n\n    return rectangle_within_dimensions\n\n\ndef _get_rectangles_around_letters(contours, img_dimensions):\n\n    rectangles = []\n\n    for contour in contours:\n\n        rectangle = cv2.boundingRect(contour)\n\n        has_letter_dimensions = _check_dimensions(img_dimensions, rectangle)\n        if has_letter_dimensions:\n            rectangles.append(rectangle)\n\n    rectangles.sort(key=lambda tup: tup[0])\n\n    return rectangles<\/pre>\n\n\n\n<p>The function <em>get_prediction<\/em> from the code above calls the function <em>_get_letter_imgs<\/em> from the code below, which extracts the character images from the input license plate image and returns them as a list. 
The function <em>_get_letter_predictions<\/em> iterates through this list and executes the function <em>_match_fe_letter<\/em> for each image. The function <em>_match_fe_letter<\/em> iterates through the set of license plate reference characters (shown in Figure 8) and applies the OpenCV <em>matchTemplate<\/em> method after the images are resized to the same shape. OpenCV&#8217;s <em>matchTemplate<\/em> returns a value indicating the similarity of the license plate character to the reference character. The reference character with the highest similarity is chosen as the matched character. Finally, the function <em>_get_letter_predictions<\/em> returns the matched characters as a string.<\/p>\n\n\n\n<p>Figure 7 shows a space between the &#8220;BL&#8221; and &#8220;LU&#8221; and a space between &#8220;LU&#8221; and &#8220;613&#8221;. The function<em> _add_space_characters<\/em> adds a blank character between the matched characters if the gap between adjacent character images exceeds a certain threshold (20 pixels in the code below).<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">def _get_letter_imgs(img, rectangles):\n\n    letter_imgs = []\n\n    for rect in rectangles:\n        (x,y,w,h) = rect\n        current_letter = img[y:y+h, x:x+w]\n        letter_imgs.append(current_letter)\n        \n    return letter_imgs\n\n\ndef _get_letter_predictions(letter_imgs):\n\n    letters = \"\"\n\n    for letter_img in letter_imgs:\n        prediction = _match_fe_letter(letter_img)\n        letters += prediction\n\n    return letters\n\ndef _add_space_characters(letters, rectangles):\n\n    space_counter = 0\n    \n    for n,_ in enumerate(rectangles):\n        (x1,_,w1,_) = rectangles[n]\n        (x2,_,_,_) = rectangles[n+1]\n        distance = x2-(x1+w1)\n        \n        if distance &gt; 20:\n           
 index = n + 1 + space_counter\n            space_counter += 1\n            letters = letters[:index] + ' ' + letters[index:]\n        \n        if n == len(rectangles)-2:\n            break\n            \n    return letters\n\n\ndef _match_fe_letter(img):\n\n    fe_letter_dir = feletterspath\n\n    similarities = []\n\n    for template_img in sorted(os.listdir(fe_letter_dir)):\n        template = cv2.imread(os.path.join(fe_letter_dir, template_img), cv2.IMREAD_GRAYSCALE)\n        img = cv2.resize(img, (template.shape[1], template.shape[0]))\n        similarity = cv2.matchTemplate(img,template,cv2.TM_CCOEFF_NORMED)[0][0]\n        similarities.append(similarity)\n    \n    letter_array = [os.path.splitext(letter)[0]\n        for letter in sorted(os.listdir(fe_letter_dir))]\n\n    letter = letter_array[similarities.index(max(similarities))]\n\n    return letter<\/pre>\n\n\n\n<p>The function <em>get_prediction<\/em> is called several times with differently processed input images, see code below. The code calls OpenCV&#8217;s <em>threshold <\/em>with a range of thresholds and feeds the images into the method <em>get_prediction<\/em>. The result is appended to the <em>results<\/em> list.<\/p>\n\n\n\n<pre class=\"EnlighterJSRAW\" data-enlighter-language=\"generic\" data-enlighter-theme=\"\" data-enlighter-highlight=\"\" data-enlighter-linenumbers=\"\" data-enlighter-lineoffset=\"\" data-enlighter-title=\"\" data-enlighter-group=\"\">results = []\n\nfor i in range(50,200,10):\n    _,prediction_img = cv2.threshold(gray_img, i, 255, cv2.THRESH_BINARY)\n    prediction = get_prediction(prediction_img)\n    if prediction is not None:\n        results.append((i,prediction))<\/pre>\n\n\n\n<p>Figure 9 shows the list of results from the license plate&#8217;s input image. You can see that the code correctly predicted the license plate six times. 
The majority among these identical predictions can be used as the final predicted output.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"161\" height=\"193\" src=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/files\/2020\/03\/output-1.png\" alt=\"\" class=\"wp-image-1732\" \/><figcaption>Figure 9: Result List<\/figcaption><\/figure>\n<\/div>\n\n\n<h2 class=\"wp-block-heading\">Conclusion<\/h2>\n\n\n\n<p>In this blog I described a gate opening system designed by students from the class Design Cyber Physical Systems. The idea was that a car approaches the gate and a camera system takes images of the car including its license plate. We trained a neural network to obtain a mask indicating the license plate&#8217;s position in the image. The application extracted the license plate from the image with the mask and applied character recognition based on OpenCV.<\/p>\n\n\n\n<p>The application was actually distributed over two computers. One computer (a Raspberry Pi) took images and controlled the output relay; the other computer calculated the mask image with the neural network. We did not actually open a gate as described in the introduction: we connected a signal light to a relay which was controlled by the Raspberry Pi. The communication between both computers was realized with a REST interface.<\/p>\n\n\n\n<p>The character recognition only worked well if we fed in the license plate image several times with different thresholds. A majority vote was taken to choose the recognized license plate number. 
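The majority vote over the thresholded predictions can be sketched with collections.Counter (a minimal illustration; the results list of (threshold, prediction) tuples mimics the output of the loop above with hypothetical values):

```python
from collections import Counter

# Hypothetical results, shaped like the (threshold, prediction)
# tuples collected in the threshold loop above:
results = [(50, 'BL LU 613'), (60, 'BL LU 613'), (70, 'BL LU 6I3'),
           (80, 'BL LU 613'), (90, 'BL LU 613')]

# Count how often each predicted string occurs and take the most common one.
votes = Counter(prediction for _, prediction in results)
final_prediction, count = votes.most_common(1)[0]
print(final_prediction)  # BL LU 613
```

A single misread character (here the hypothetical "6I3") is outvoted by the predictions that agree.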
<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Acknowledgement<\/h2>\n\n\n\n<p>Thanks to Jonas Acker,  Marc Bitzer and Thomas Sch\u00f6ller for participating at the class Design Cyber Physical Systems and providing the code which was the result of the project from this class.<\/p>\n\n\n\n<p>Also special thanks to the University of Applied Science Albstadt-Sigmaringen offering a classroom and appliances to enable this research. <\/p>\n","protected":false},"excerpt":{"rendered":"<p>At the school I work I am instructing a class called Design Cyber Physical Systems. The name of the class leaves many interpretations open about its content. However I leave this open intentionally. In the previous semesters, I let the students choose a topic, which has to do something with sensors, actors and micro-controllers. The &hellip; <a href=\"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/2020\/04\/02\/license-plate-recognition-using-neural-network\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">License Plate Recognition using Neural 
Network<\/span><\/a><\/p>\n","protected":false},"author":24,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[4,3,5,7,15],"class_list":["post-1725","post","type-post","status-publish","format-standard","hentry","category-allgemein","tag-ai","tag-deep-learning","tag-ki","tag-neural-network","tag-object-detection"],"_links":{"self":[{"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/posts\/1725","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/users\/24"}],"replies":[{"embeddable":true,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/comments?post=1725"}],"version-history":[{"count":396,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/posts\/1725\/revisions"}],"predecessor-version":[{"id":4842,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/posts\/1725\/revisions\/4842"}],"wp:attachment":[{"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/media?parent=1725"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/categories?post=1725"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www3.hs-albsig.de\/wordpress\/point2pointmotion\/wp-json\/wp\/v2\/tags?post=1725"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}