Road crashes are an endemic problem worldwide and the leading cause of death among young people aged 5 to 29 years. The speed limit is a crucial piece of information for assessing the safety of, and safety requirements for, a road segment. Following the International Road Assessment Programme (iRAP) methodology, it can be recorded from visual imagery. In this research, we trained three versions of the You Only Look Once (YOLO) model – YOLOv5nu, YOLOv8n and YOLO11n – to automatically detect and classify speed limit signs, using two public datasets. The best mean average precision achieved was 0.783, so, to improve accuracy, we retrained the YOLO models to detect speed limit signs only and performed classification with an Optical Character Recognition (OCR) model. With this combination, the best mean average precision rose to 0.845, while the standalone mean average precision of the OCR stage reached 0.976 when applied to ground-truth cropped images. After training, the pipeline was tested on real video imagery covering 64 km of roads in northern Italy, in the provinces of Udine and Gorizia. A coding pipeline converted the frame-by-frame automated detections into timestamps, which were then associated with their respective geographic locations, and the results were compared with manually coded iRAP data. The overall precision of the model was 89% for the test area, close to state-of-the-art results. Future research steps include training the model to differentiate speed limit cancellation and temporary speed limit signs for a more flexible approach.
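The following is a minimal sketch of the detect-then-read pipeline described above, assuming Ultralytics YOLO weights fine-tuned for a single speed-limit-sign class and EasyOCR as the OCR stage; the weights file name, video path, frame rate and confidence threshold are illustrative placeholders, not the paper's actual configuration.

```python
# Sketch: YOLO detection of speed limit signs, OCR of the cropped sign,
# and conversion of frame indices into video timestamps.
import cv2
import easyocr
from ultralytics import YOLO

detector = YOLO("speed_limit_yolo11n.pt")   # hypothetical fine-tuned weights
reader = easyocr.Reader(["en"], gpu=False)  # reads the digits on the cropped sign


def read_speed_limits(video_path: str, fps: float = 30.0):
    """Yield (timestamp_seconds, speed_limit) pairs from a survey video."""
    cap = cv2.VideoCapture(video_path)
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Run the detector on the frame; keep only reasonably confident boxes.
        for result in detector(frame, verbose=False):
            for box in result.boxes:
                if float(box.conf[0]) < 0.5:  # assumed confidence threshold
                    continue
                x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
                crop = frame[y1:y2, x1:x2]
                # OCR the cropped sign and keep purely numeric readings.
                for _, text, _ in reader.readtext(crop):
                    if text.strip().isdigit():
                        yield frame_idx / fps, int(text)
        frame_idx += 1
    cap.release()


if __name__ == "__main__":
    # Timestamps produced here would then be matched to the survey vehicle's
    # GPS log to georeference each detection, as done in the study.
    for ts, speed in read_speed_limits("survey_drive.mp4"):
        print(f"t={ts:.2f}s  speed_limit={speed} km/h")
```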