As we progress through our tutorial, our group is already discussing potential applications. We're very excited to leave the neatly prepared course datasets behind and dive into real-world problems.
On a Thursday evening, Yoovraj and I met and spontaneously decided to start a small project. Since Yoovraj is into robotics, he always has an Arduino kit in his backpack. He also had the self-made robotic arm that he built at the hackathon where we first met six months ago. We decided to hold our own little hackathon and see what would happen.
Introducing: The Tinder Swiper
The idea just popped up. We decided to automate the Tinder selection process and built: The Tinder Swiper.
The Tinder Swiper recognizes faces and automatically swipes right for you, while spam and obscure pictures get dismissed. Here's how we did it: since we didn't have access to the Tinder API, we had to find a way to detect faces directly from the mobile device's screen. That turned out to be the most difficult problem; I'll tell you why in a bit.
While Yoovraj was preparing the hardware, I was writing the code for the face detection (we used OpenCV's Haar cascades for that). We connected the robotic arm to the Arduino, and the Arduino and an external webcam to my computer. The code was written in Python. First, import the libraries, load the classifiers (don't forget to download the .xml file that belongs to each classifier), and connect to the serial port:
import cv2
import numpy as np
import serial

face_cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier("haarcascade_eye.xml")
cap = cv2.VideoCapture(1)  # 1 = external webcam (0 is usually the built-in camera)
s = serial.Serial(port='/dev/cu.usbmodem1411', baudrate=9600)  # Arduino link
The next block of code starts with a while loop. It's not the most elegant way to solve the problem (using len() and a flag), but we only had 2-3 hours to make it work, so we didn't mind (feel free to improve and adjust). As explained, the goal was to detect a face (and eyes) on the screen (len(faces) > 0) and, if so, swipe right. If no face is detected (most likely spam or obscure pictures that you don't want anyway), the robotic arm goes left.
while True:
    ret, img = cap.read()
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)

    face_found = False
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = img[y:y+h, x:x+w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)
        face_found = True  # at least one face on screen

    if face_found:
        s.write(b'R')  # swipe right (the command byte must match your Arduino sketch)
    else:
        s.write(b'L')  # no face, most likely spam: swipe left

    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # ESC to quit
        break

cap.release()
cv2.destroyAllWindows()
It worked in the end, but the difficulty lay in catching the face from a mobile screen. Our Tinder Swiper failed to recognize faces quite a few times because of light, reflections, and the circumstances in general. But the idea was not bad. 😀
This could be the first automated selection step. The next step could be building a recommender system based on your preferences. But that’s for another Thursday evening.
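To hint at what such a recommender could look like, here is a tiny logistic-regression sketch in plain NumPy that learns from past swipes. The features and the training data are entirely made up for illustration; a real version would use image embeddings rather than two hand-picked numbers:

```python
import numpy as np

# Hypothetical training data: one row per past profile.
# Columns: [number of faces detected, mean image brightness / 255]
# Labels: 1 = swiped right, 0 = swiped left.
X = np.array([[1, 0.6], [1, 0.7], [0, 0.5], [2, 0.4], [0, 0.9], [1, 0.5]], dtype=float)
y = np.array([1, 1, 0, 1, 0, 1], dtype=float)

# Plain logistic regression via gradient descent, no ML library needed
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted swipe probability
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on the weights
    b -= 0.5 * np.mean(p - y)                # gradient step on the bias

# Score a new profile: one face detected, medium brightness
new_profile = np.array([1, 0.55])
prob = 1.0 / (1.0 + np.exp(-(new_profile @ w + b)))
print(f"swipe-right probability: {prob:.2f}")
```

Since the toy data is separable on the face count alone, the model learns that a detected face strongly predicts a right swipe, which is exactly what the Tinder Swiper hard-codes today.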