
Motion Detection and Save Video Clip to Dropbox with pcDuino8 Uno and LinkSprite smartRemoteCamera WiFI Plug and Play Security Camera

Posted by: admin, February 6, 2016

LinkSprite makes a WiFi security camera with great video quality and excellent night vision.

We installed one LinkSprite wifi security camera at the front door, and can watch the real-time video using the included free mobile app.

What about motion detection, with the captured video clips saved to the cloud?

Dropbox is a nice destination for the clips: it has PC/Mac clients as well as mobile apps, so we can watch the video on a computer or a smartphone.

We chose the pcDuino8 Uno to run the motion detection algorithm and upload the captured video clips to Dropbox, as its 8 cores are powerful enough to perform the computation.

On the pcDuino8 Uno, we will use OpenCV to perform the motion detection. The motion detection algorithm follows Adrian’s post. To make things easier, LinkSprite provides an image for pcDuino8 Uno with OpenCV prebuilt.
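The core idea of the algorithm — comparing each grayscale frame against a slowly updated running-average background and flagging large pixel differences — can be sketched with plain NumPy on synthetic frames. The threshold and `alpha` values below mirror the constants used in the full script; the function name and tiny 8×8 frames are just for illustration:

```python
import numpy as np

def detect_motion(background, frame, thresh=50, alpha=0.1):
    """Return (motion_detected, updated_background) for one grayscale frame."""
    delta = np.abs(frame.astype(np.float32) - background)  # per-pixel difference
    mask = delta > thresh                                  # binary motion mask
    # exponential moving average: slowly absorb the new frame into the background
    background = (1 - alpha) * background + alpha * frame
    return bool(mask.any()), background

# synthetic 8x8 grayscale frames
bg = np.zeros((8, 8), dtype=np.float32)
still = np.zeros((8, 8), dtype=np.uint8)
moving = still.copy()
moving[2:5, 2:5] = 255            # a bright "object" enters the scene

hit, bg = detect_motion(bg, still)
print(hit)   # False: nothing changed
hit, bg = detect_motion(bg, moving)
print(hit)   # True: large pixel delta
```

The real script does the same thing with `cv2.absdiff`, `cv2.threshold`, and `cv2.accumulateWeighted`, plus contour filtering so that tiny differences (noise) are ignored.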

How can we read video frames from the LinkSprite wifi security camera? We use OpenCV’s VideoCapture, which accepts an RTSP URL. The sample code is below (please note that the smartRemoteCamera and the viewer should be on the same local network):

camera = cv2.VideoCapture("rtsp://admin:admin@192.168.1.19/live1.264")
(grabbed, frame) = camera.read()

In the code, 192.168.1.19 is the IP address of the wifi security camera. To find out the IP address, use the accompanying mobile app, smartRemoteCamera.
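Before pointing VideoCapture at the URL, it can help to confirm the camera actually answers on the RTSP port. The quick TCP probe below is not part of the original script, and it assumes the camera listens on the standard RTSP port 554:

```python
import socket

def rtsp_reachable(ip, port=554, timeout=2.0):
    """Return True if something is listening on the camera's RTSP port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((ip, port))
        return True
    except (socket.timeout, socket.error):
        return False
    finally:
        s.close()
```

For example, `rtsp_reachable("192.168.1.19")` returning False usually means a wrong IP address, so VideoCapture would block or fail to grab frames.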

Before we dig into the code, we also need to create a Dropbox token. Assuming you already have a Dropbox account, create an app in Dropbox and obtain the token.

Hit the ‘Generate’ button to create the token; you will need it in the Python script.
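With the token in hand, uploads go through the v1 `put_file` call used in the script. The helper below is a hypothetical wrapper (not in the original) that mirrors the script’s behavior of deleting the local clip whether or not the upload succeeds; `client` can be anything exposing `put_file(name, fileobj)`, such as `dropbox.client.DropboxClient`:

```python
import os

def upload_and_remove(client, path):
    """Upload a local clip via the Dropbox client, then delete the local copy."""
    try:
        f = open(path, 'rb')
        try:
            return client.put_file(os.path.basename(path), f)
        finally:
            f.close()
    finally:
        # the script removes the clip whether the upload succeeded or not,
        # so a failed upload does not fill up the SD card
        if os.path.exists(path):
            os.remove(path)
```

In the full script this same pattern appears inline in the “Unoccupied” branch after the clip writer is released.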

Finally, here is the complete Python script:

# please change the IP address of the RTSP camera

import argparse
import datetime
import os
import time

import cv2
import dropbox
import imutils
import numpy as np

cam_ipaddress = "192.168.1.6"

access_token = 'your dropbox token'
rtsp_address = "rtsp://admin:admin@" + cam_ipaddress + "/live1.264"

client = dropbox.client.DropboxClient(access_token)
print 'linked account:', client.account_info()

# the moving average forgetting parameter
alpha = 0.1
isWritingClip = 0
sen_thresh = 50

# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-v", "--video", help="path to the video file")
ap.add_argument("-a", "--min-area", type=int, default=500, help="minimum area size")
args = vars(ap.parse_args())

camera = cv2.VideoCapture(rtsp_address)

# define the codec for the saved clips
fourcc = cv2.cv.CV_FOURCC(*'XVID')

# initialize the first frame in the video stream
firstFrame = None

# loop over the frames of the video
i_notgrabbed = 0
while True:
    # the timestamp of the current frame names the clip file
    timestr = time.strftime("%Y%m%d-%H%M%S")

    # grab the current frame; skip the iteration if the stream hiccups
    try:
        (grabbed, frame) = camera.read()
    except Exception:
        continue

    try:
        width = np.size(frame, 1)
    except Exception:
        continue
    height = np.size(frame, 0)
    frameSize = (width, height)

    text = "Unoccupied"

    # if frames could not be grabbed repeatedly, reopen the RTSP stream
    if not grabbed:
        i_notgrabbed = i_notgrabbed + 1
        print timestr, grabbed, i_notgrabbed
        if i_notgrabbed > 20:
            camera.release()
            try:
                camera = cv2.VideoCapture(rtsp_address)
            except Exception:
                continue
            i_notgrabbed = 0
        continue

    OriginalFrame = frame
    # resize the frame, convert it to grayscale, and blur it
    frame = imutils.resize(frame, width=300)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)

    # if the first frame is None, initialize the background from it
    if firstFrame is None:
        firstFrame = gray
        avg = np.float32(firstFrame)
        continue

    # compute the absolute difference between the current frame and
    # the background frame
    frameDelta = cv2.absdiff(firstFrame, gray)
    thresh = cv2.threshold(frameDelta, sen_thresh, 255, cv2.THRESH_BINARY)[1]

    # dilate the thresholded image to fill in holes, then find contours
    # on the thresholded image
    thresh = cv2.dilate(thresh, None, iterations=2)
    (cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL,
        cv2.CHAIN_APPROX_SIMPLE)

    # update the running-average background
    cv2.accumulateWeighted(gray, avg, alpha)
    firstFrame = cv2.convertScaleAbs(avg)

    # loop over the contours
    for c in cnts:
        # if the contour is too small, ignore it
        if cv2.contourArea(c) < args["min_area"]:
            continue

        # compute the bounding box for the contour, draw it on the frame,
        # and update the text
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        text = "Occupied"

    if text == 'Occupied':
        if isWritingClip == 0:
            clipfilename = timestr + '.avi'
            out = cv2.VideoWriter(clipfilename, fourcc, 20.0, frameSize)
            isWritingClip = 1
        out.write(OriginalFrame)
        start_time = time.time()

    if text == 'Unoccupied':
        if isWritingClip == 1:
            elapsed_time = time.time() - start_time
            # close the clip only after 60 seconds without motion;
            # motion detected after that starts a new clip
            if elapsed_time > 60:
                isWritingClip = 0
                out.release()
                try:
                    f = open(clipfilename, 'rb')
                    response = client.put_file(clipfilename, f)
                    f.close()
                    print "uploaded:", response
                    os.remove(clipfilename)
                except Exception:
                    os.remove(clipfilename)
                    continue
            else:
                # still inside the grace period: keep recording
                out.write(OriginalFrame)

    # draw the text and timestamp on the frame
    cv2.putText(frame, "Room Status: {}".format(text), (10, 20),
        cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 2)
    cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %I:%M:%S%p"),
        (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.35, (0, 0, 255), 1)

    # show the frame
    cv2.imshow("Security Feed", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the `q` key is pressed, break from the loop
    if key == ord("q"):
        break

# cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()
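The clip-splitting rule buried in the main loop — open a writer on the first motion frame, keep writing through short quiet spells, and only close and upload after 60 motion-free seconds — can be restated as a small pure function. The names here are illustrative, not from the script:

```python
def clip_action(writing, motion, idle_seconds, gap=60):
    """Return (action, writing) for one frame of the recording state machine."""
    if motion:
        # first motion frame opens a new clip; later ones just append
        return ('open' if not writing else 'write'), True
    if writing:
        if idle_seconds > gap:
            return 'close', False      # quiet long enough: finish and upload
        return 'write', True           # grace period: keep recording
    return None, False                 # idle, nothing to do

print(clip_action(False, True, 0))    # ('open', True)
print(clip_action(True, False, 61))   # ('close', False)
```

The grace period is why a person pausing in front of the camera for a few seconds does not fragment one event into many small .avi uploads.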
