Preventing Drowsy Driving Accidents with a Convolutional Neural Network
Drowsy Driving: A Serious Problem
The National Highway Traffic Safety Administration estimates that 91,000 crashes each year involve drowsy drivers, causing roughly 50,000 injuries and nearly 800 deaths. In addition, 1 in every 24 adult drivers reports having fallen asleep at the wheel within the past 30 days. Research has even found that going more than 20 hours without sleep is equivalent to a blood alcohol concentration of 0.08%, the legal limit in the United States. Because of this serious problem, a group of fellow data scientists and I set out to develop a neural network that can detect whether eyes are closed and, combined with computer vision, detect whether a live person has had their eyes closed for more than a second. This technology is useful to anyone interested in improving driving safety, including commercial and everyday drivers, car companies, and auto insurers.

Table of contents:
Building the convolutional neural network
Creating the webcam application

Data Collection

We used full-face data from several sources, namely open-eye faces from the University of Massachusetts Amherst and closed-eye faces from Nanjing University. We then used a simple Python function to crop the eyes out of this dataset, leaving just over 30,000 cropped eye images. We added a buffer to each crop so that it captures not only the eye but also the area around it. This cropping function will be reused later in the webcam section.

```python
import os
import face_recognition
from PIL import Image

def eye_cropper(folders):

    # count for iterative file saving and progress monitoring
    count = 0

    # for loop going through each image file
    for folder in os.listdir(folders):
        for file in os.listdir(folders + '/' + folder):

            # using facial recognition library on image
            image = face_recognition.load_image_file(folders + '/' + folder + '/' + file)

            # create a variable for the facial feature coordinates
            face_landmarks_list = face_recognition.face_landmarks(image)

            # create a placeholder list for the eye coordinates,
            # skipping the image if no face was found
            eyes = []
            try:
                eyes.append(face_landmarks_list[0]['left_eye'])
                eyes.append(face_landmarks_list[0]['right_eye'])
            except IndexError:
                continue

            for eye in eyes:
                # establish the max and min x and y coordinates of the eye
                x_max = max([coordinate[0] for coordinate in eye])
                x_min = min([coordinate[0] for coordinate in eye])
                y_max = max([coordinate[1] for coordinate in eye])
                y_min = min([coordinate[1] for coordinate in eye])

                # establish the range of x and y coordinates
                x_range = x_max - x_min
                y_range = y_max - y_min

                # to make sure the full eye is captured,
                # calculate the coordinates of a square that has a 50%
                # cushion added to the axis with the larger range
                if x_range > y_range:
                    right = round(.5 * x_range) + x_max
                    left = x_min - round(.5 * x_range)
                    bottom = round(((right - left) - y_range) / 2) + y_max
                    top = y_min - round(((right - left) - y_range) / 2)
                else:
                    bottom = round(.5 * y_range) + y_max
                    top = y_min - round(.5 * y_range)
                    right = round(((bottom - top) - x_range) / 2) + x_max
                    left = x_min - round(((bottom - top) - x_range) / 2)

                # crop original image using the cushioned coordinates
                im = Image.open(folders + '/' + folder + '/' + file)
                im = im.crop((left, top, right, bottom))

                # resize image for input into our model
                im = im.resize((80, 80))

                # save file to output folder, using the count for unique names
                im.save('yourfolderforcroppedeyes' + '/' + str(count) + '.jpg')

                # increase count for iterative file saving
                count += 1

                # print count every 200 photos to monitor progress
                if count % 200 == 0:
                    print(count)

# call function to crop eyes from the full-face images
eye_cropper('yourfullfaceimagefolder')
```

Here is a sample of the data we used to train the model:
Creating the Convolutional Neural Network

Determining Metrics

Because predicting the positive class (a sleeping driver) is more important to us than predicting the negative class (an awake driver), our most important metric is recall (sensitivity). The higher the recall, the fewer sleeping drivers the model incorrectly predicts to be awake (false negatives). The only problem here is that our positive class significantly outnumbers our negative class. Because of this, it is better to use the F1 score or the Precision-Recall AUC score, since they also account for the number of times we guess a driver is asleep when they are actually awake (precision). Otherwise our model would always predict we are asleep and be unusable. Another approach for handling imbalanced image data, which we did not use here, is image augmentation; Jason Brownlee does a great job of explaining how to use it.

Preparing the Image Data

The next step is to import the images and preprocess them for the model. The imports needed for this section are at the top of the code below:

```python
import os
import cv2

# load every image in a folder, attaching the given label
# (eyes = 1 for closed, 0 for open)
def load_images_from_folder(folder, eyes):
    images = []
    count = 0
    for filename in os.listdir(folder):
        # read and resize each image, skipping any unreadable files
        img = cv2.imread(os.path.join(folder, filename))
        if img is None:
            continue
        img = cv2.resize(img, (80, 80))
        images.append([img, eyes])

        # print a running count every 500 imports to monitor progress
        count += 1
        if count % 500 == 0:
            print('Successful Image Import Count = ' + str(count))
    return images

folder = "../data/train/new_open_eyes"
open_eyes = load_images_from_folder(folder, 0)

folder = "../data/train/New_Closed_Eyes"
closed_eyes = load_images_from_folder(folder, 1)

# combine the two labeled sets
eyes = closed_eyes + open_eyes
```

Set up the variables, with the independent X as the images and the dependent y as the corresponding labels (1 for closed eyes, 0 for open):
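The code that built X and y did not survive the conversion of this article. Below is a minimal sketch of that step, using a plain NumPy shuffle-and-split (scikit-learn's train_test_split would work equally well) and a small dummy `eyes` list so the sketch runs standalone; in the actual pipeline, `eyes` is the labeled list produced by load_images_from_folder above:

```python
import numpy as np

# dummy stand-in for the [image, label] pairs built by load_images_from_folder
eyes = [[np.zeros((80, 80, 3), dtype=np.uint8), 1] for _ in range(10)] + \
       [[np.ones((80, 80, 3), dtype=np.uint8), 0] for _ in range(10)]

# independent X: the images; dependent y: the labels (1 closed, 0 open)
X = np.array([pair[0] for pair in eyes], dtype=np.float32)
y = np.array([pair[1] for pair in eyes])

# scale pixel values to the 0-1 range
X = X / 255.0

# shuffle and hold out 20% of the data for testing
rng = np.random.default_rng(42)
order = rng.permutation(len(X))
split = int(0.8 * len(X))
X_train, X_test = X[order[:split]], X[order[split:]]
y_train, y_test = y[order[:split]], y[order[split:]]

print(X_train.shape)  # (16, 80, 80, 3)
```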
Convolutional layers: these layers create subsets of pixels rather than working on the full image and allow for faster models. Depending on the number of filters set, the output may be more or less dense than the original image, but the filters let the model learn more complex relationships using fewer resources. We used 32 filters. Use at least one convolutional layer; typically two or more are needed. The optimal setup for us was stacking three 3x3 layers together followed by another two 3x3 layers. The general trend in CNNs is toward smaller filter sizes; in fact, a double 3x3 layer is essentially equivalent to a single 5x5 layer, but it is faster and usually produces better scores.

Flatten: make sure to flatten the image array so it can feed into the dense layers.

Dense layers: the denser the layers, the longer our model takes to train, and as the number of neurons in these layers grows, so does the complexity of the relationships the network can learn. In general, the idea behind the convolutional layers is to avoid needing an excessively deep dense-layer scheme. In our model we used three layers with a decreasing number of relu-activated neurons (256, 128, 64). We also used 30% dropout after each layer.

Output layer: finally, because this is a binary classification problem, make sure to use a sigmoid activation on the output layer.

Compiling the model: in model.compile(), we need to set the metric to PR AUC (tf.keras.metrics.AUC(curve='PR') in TensorFlow) or recall (tf.keras.metrics.Recall() in TensorFlow). Set the loss to binary cross-entropy, since this is a binary classification model, and choose a good optimizer.

Fitting the model: set the batch size as large as possible. I ran a grid search on a 32 GB TPU in Google Colab, and it easily handled batch sizes of over 1,000. When in doubt, try a batch size of 32 and increase it as long as memory is not overloaded. As for epochs, returns diminished after 20 epochs, so I would not go much higher than that for this particular CNN. Here is the full setup in TensorFlow Keras:
```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

# instantiate the sequential model
model = Sequential()

# Adding first three convolutional layers
model.add(Conv2D(filters=32,               # number of filters
                 kernel_size=(3, 3),       # height/width of filter
                 activation='relu',        # activation function
                 input_shape=(80, 80, 3))) # shape of input (image)
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu'))

# Adding pooling after convolutional layers
model.add(MaxPooling2D(pool_size=(2, 2)))  # dimensions of the region being pooled

# Adding second set of convolutional layers
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(filters=32, kernel_size=(3, 3), activation='relu'))

# Add last pooling layer
model.add(MaxPooling2D(pool_size=(2, 2)))

# Flatten the array for the dense layers
model.add(Flatten())

# Adding first dense layer with 256 nodes
model.add(Dense(256, activation='relu'))

# Adding a dropout layer to avoid overfitting
model.add(Dropout(0.3))

model.add(Dense(128, activation='relu'))
model.add(Dropout(0.3))

model.add(Dense(64, activation='relu'))
model.add(Dropout(0.3))

# adding output layer
model.add(Dense(1, activation='sigmoid'))

# compiling the model
model.compile(loss='binary_crossentropy',
              optimizer='adam',
              metrics=[tf.keras.metrics.AUC(curve='PR')])

# fitting the model
model.fit(X_train, y_train,
          batch_size=800,
          validation_data=(X_test, y_test),
          epochs=24)
```
```python
# evaluate the model
model.evaluate(X_test, y_test, verbose=1)
```

Final Precision-Recall area under the curve: 0.981033

Creating the Webcam Application

Once you have a model you are happy with, save it using model.save('yourmodelname.h5'). When saving the production model, make sure to run it without validation data, which would otherwise cause problems on import. Installs and imports (these are Mac-optimized, though the same scripts can also be used on Windows):
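The original list of installs did not survive conversion. Based on the libraries the webcam scripts below actually import (cv2, face_recognition, TensorFlow/Keras, playsound), a plausible set is the following; these package names are inferred, not the author's original list:

```shell
# assumed installs, inferred from the imports used in the webcam scripts
pip install opencv-python      # cv2, for webcam capture and on-screen drawing
pip install face_recognition   # facial landmark detection (requires dlib)
pip install tensorflow         # to load the saved Keras model
pip install playsound          # to play the alert sound
```

The saved model is then loaded in the script with tf.keras.models.load_model('yourmodelname.h5').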
We also want to display an alert if six frames in a row are closed-eye ("sleeping"). This can be done with a simple if statement, shown in the full script below.
Finally, we need to display the frame and give the while loop an exit key. cv2.waitKey(1) determines how long each frame is displayed: the number in parentheses is the number of milliseconds the frame will be shown, unless the pressed key captured in k is 27, the escape key, which breaks the loop:

As we can see, the model is very effective: although it took a long time to train, it returns predictions within milliseconds. With some further refinement, and exported to an external machine, this program could easily be applied to real situations and might even save lives. The full code for the webcam application follows.
```python
# webcam frame is inputted into function
def eye_cropper(frame):

    # create a variable for the facial feature coordinates
    facial_features_list = face_recognition.face_landmarks(frame)

    # grab the coordinates for one eye, returning early if
    # facial recognition found no eyes in the frame
    try:
        eye = facial_features_list[0]['left_eye']
    except (IndexError, KeyError):
        try:
            eye = facial_features_list[0]['right_eye']
        except (IndexError, KeyError):
            return

    # establish the max and min x and y coordinates of the eye
    x_max = max([coordinate[0] for coordinate in eye])
    x_min = min([coordinate[0] for coordinate in eye])
    y_max = max([coordinate[1] for coordinate in eye])
    y_min = min([coordinate[1] for coordinate in eye])

    # establish the range of x and y coordinates
    x_range = x_max - x_min
    y_range = y_max - y_min

    # in order to make sure the full eye is captured,
    # calculate the coordinates of a square that has a
    # 50% cushion added to the axis with the larger range and
    # then match the smaller range to the cushioned larger range
    if x_range > y_range:
        right = round(.5 * x_range) + x_max
        left = x_min - round(.5 * x_range)
        bottom = round(((right - left) - y_range) / 2) + y_max
        top = y_min - round(((right - left) - y_range) / 2)
    else:
        bottom = round(.5 * y_range) + y_max
        top = y_min - round(.5 * y_range)
        right = round(((bottom - top) - x_range) / 2) + x_max
        left = x_min - round(((bottom - top) - x_range) / 2)

    # crop the image according to the coordinates determined above
    cropped = frame[top:(bottom + 1), left:(right + 1)]

    # resize the image and add the batch dimension the model expects
    cropped = cv2.resize(cropped, (80, 80))
    image_for_prediction = cropped.reshape(-1, 80, 80, 3)

    return image_for_prediction
```
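The final reshape above adds the batch dimension that the Keras model expects. As a standalone illustration with a dummy array (not the article's actual frame data):

```python
import numpy as np

# dummy 80x80 RGB crop standing in for the resized eye image
cropped = np.zeros((80, 80, 3), dtype=np.uint8)

# reshape to (batch, height, width, channels); -1 infers a batch size of 1
image_for_prediction = cropped.reshape(-1, 80, 80, 3)

print(image_for_prediction.shape)  # (1, 80, 80, 3)
```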
```python
# initiate webcam
cap = cv2.VideoCapture(0)
w = cap.get(cv2.CAP_PROP_FRAME_WIDTH)
h = cap.get(cv2.CAP_PROP_FRAME_HEIGHT)
if not cap.isOpened():
    raise IOError('Cannot open webcam')

# set a counter
counter = 0

# toggle used to process only every other frame
frame_count = 0

# create a while loop that runs while webcam is in use
while True:

    # capture frames being outputted by webcam
    ret, frame = cap.read()

    # use only every other frame to manage speed and memory usage
    if frame_count == 0:
        frame_count = 1
    else:
        frame_count = 0
        continue

    # function called on the frame
    image_for_prediction = eye_cropper(frame)
    try:
        image_for_prediction = image_for_prediction / 255.0
    except TypeError:
        continue

    # get prediction from model
    prediction = eye_model.predict(image_for_prediction)

    # Based on prediction, display either "Open" or "Closed"
    if prediction < 0.5:
        counter = 0
        status = 'Open'

        cv2.rectangle(frame, (round(w/2) - 110, 20), (round(w/2) + 110, 80),
                      (38, 38, 38), -1)
        cv2.putText(frame, status, (round(w/2) - 80, 70),
                    cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 2, cv2.LINE_4)

        x1, y1, w1, h1 = 0, 0, 175, 75
        ## Draw black background rectangle
        cv2.rectangle(frame, (x1, y1), (x1 + w1 - 20, y1 + h1 - 20), (0, 0, 0), -1)
        ## Add text
        cv2.putText(frame, 'Active', (x1 + int(w1/10), y1 + int(h1/2)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    else:
        counter = counter + 1
        status = 'Closed'

        cv2.rectangle(frame, (round(w/2) - 110, 20), (round(w/2) + 110, 80),
                      (38, 38, 38), -1)
        cv2.putText(frame, status, (round(w/2) - 104, 70),
                    cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 255), 2, cv2.LINE_4)

        x1, y1, w1, h1 = 0, 0, 175, 75
        ## Draw black background rectangle
        cv2.rectangle(frame, (x1, y1), (x1 + w1 - 20, y1 + h1 - 20), (0, 0, 0), -1)
        ## Add text
        cv2.putText(frame, 'Active', (x1 + int(w1/10), y1 + int(h1/2)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

        # if the counter is greater than 2, play and show alert that user is asleep
        if counter > 2:

            ## Draw black background rectangle
            cv2.rectangle(frame,
                          (round(w/2) - 160, round(h) - 200),
                          (round(w/2) + 160, round(h) - 120),
                          (0, 0, 255), -1)
            cv2.putText(frame, 'DRIVER SLEEPING',
                        (round(w/2) - 136, round(h) - 146),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 2, cv2.LINE_4)
            cv2.imshow('Drowsiness Detection', frame)
            k = cv2.waitKey(1)

            ## Sound the alarm
            playsound('rooster.mov')
            counter = 1
            continue

    cv2.imshow('Drowsiness Detection', frame)
    k = cv2.waitKey(1)
    if k == 27:
        break

cap.release()
cv2.destroyAllWindows()
```





