Saturday, December 3, 2016

Facebook Live Game (Part 2)


Following the Facebook Live game developed last week, today I wrote another interactive Facebook Live game, this time themed around a lucky draw. I added the remaining two negative reactions, though every reaction triggers the same action: spinning the wheel. It works nicely, and I look forward to seeing merchants use it.

Friday, December 2, 2016

Pacess Studio Facebook Page


A few months ago I created a Facebook page. It is named after the first company name I ever used to formally sell apps: "Pacess Studio". Of all the company names I have used, it is the one that feels best. I only started posting recently and invited some friends to follow it, so it needs to be tidied up into something more mature. I need a proper, presentable logo. Over the past two weeks I have picked up some Illustrator drawing skills, so I used it to produce the Pacess Studio logo. In the past I did this sort of thing in Photoshop, but a logo drawn in Illustrator turns out much sharper.

The original design is shown below. Later I wanted to give it a "digital" feel, so I added circuit-board traces. I am very happy with the current version: it has a slightly Japanese look, the design is cute with a hint of Mario, and it even looks a bit like a robot, which suits my field perfectly.


But after looking at it over and over, there was too much black and white, so I turned the flower disc red. Not bad either. It's eye-catching, and I like it:

Wednesday, November 30, 2016

Shifted and Overlapping Webcam Frames on the Raspberry Pi


A few days ago I wrote a motion-detection program in Python. It runs on a Raspberry Pi with a webcam. When it spots a moving object, it saves the frame and also emails it to me, so I know what is happening at home. However, I kept getting false alarms. The cause is that in the frames the webcam delivers to Python, the previous frame is sometimes shifted and overlaid on top of the next one. Since motion detection relies on changes between frames, this shift-and-overlap satisfies the criterion and triggers a false alert. At first I thought it was because Python handles in-memory objects by reference, and that the program was slow enough for the next frame to be written into memory before the previous one was processed; but the problem persisted after switching to .copy(), so it looks like an issue with the camera or its driver. Still to be resolved.
##----------------------------------------------------------------------------------------
##  Motion Detector for Webcam
##----------------------------------------------------------------------------------------
##  Written by Pacess HO
##  Platform : Python3 + OpenCV3
##  Date : 2016.Nov.25
##  Copyright 2016 Pacess Studio.  All rights reserved.
##----------------------------------------------------------------------------------------

import smtplib
import imutils
import time
import cv2

# from msvcrt import getch
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart

##----------------------------------------------------------------------------------------
##  Main program start
print("\nMotion Detector for Webcam\n\n")
print("Warming up, please wait...\n")
camera = cv2.VideoCapture(0)
time.sleep(2)

##  Loop over the frames of the video
skipCount = 0
lastStatus = False
thisStatus = False
masterFrame = None
print("Start capturing...\n")
while True:

   ##----------------------------------------------------------------------------------------
   ##  Grab the current frame
   grabbed = camera.grab()
 
   ##  If the frame could not be grabbed, skip this iteration and try again
   if not grabbed:
      continue
 
   if skipCount > 0:
      skipCount = skipCount-1
      if skipCount == 0:
         print("Count down completed, resume capture.")
      continue

   ##----------------------------------------------------------------------------------------
   ##  Convert frame to grayscale, and blur it
   flag, frame = camera.retrieve()
   if not flag:
      continue

   currentFrame = frame.copy()
   gray = cv2.cvtColor(currentFrame, cv2.COLOR_BGR2GRAY)
   gray = cv2.GaussianBlur(gray, (21, 21), 0)
 
   ##  If the master frame is None, initialize it with the first captured frame
   if masterFrame is None:
      masterFrame = gray
      continue

   ##----------------------------------------------------------------------------------------
   ##  Compute the absolute difference between the current frame and the master frame
   frameDelta = cv2.absdiff(masterFrame, gray)
   threshold = cv2.threshold(frameDelta, 68, 255, cv2.THRESH_BINARY)[1]

   threshold = cv2.dilate(threshold, None, iterations=2)
   (_, contourArray, _) = cv2.findContours(threshold.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) 

   ##  Loop over the contours
   lastStatus = thisStatus
   thisStatus = False
   for contour in contourArray:

      ##  If the contour is too small, ignore it
      difference = cv2.contourArea(contour)
      if difference < 10000:
         continue
 
      thisStatus = True

      ##  Compute the bounding box for the contour
      (x, y, w, h) = cv2.boundingRect(contour)
      cv2.rectangle(currentFrame, (x, y), (x+w, y+h), (0, 255, 0), 2)

   ##----------------------------------------------------------------------------------------
   ##  Save frame and send email alert
   if thisStatus == True:

      masterFrame = gray

      ##  Save frame
      id = time.strftime("%Y%m%d%H%M%S")
      filename = "./capture/capture_"+str(id)+".jpg"
      cv2.imwrite(filename, currentFrame)
      print("Capture: "+filename)

      ##  Sending email notification
      email = MIMEMultipart()
      email["Subject"] = "Raspberry Pi Motion Alert"
      email["From"] = "pacess@pacess.com"
      email["To"] = "pacess@pacess.com"

      with open(filename, "rb") as filePointer:
         image = MIMEImage(filePointer.read())
      email.attach(image)

      smtp = smtplib.SMTP("smtp.mail.yahoo.com", 587)
      smtp.ehlo()
      smtp.starttls()
      smtp.login("pacess@yahoo.com", "12345678")
      smtp.sendmail(email["From"], email["To"], email.as_string())
      smtp.quit()

      ##  Skip some frames to prevent sending too many images
      print("Email sent, wait for a while...")
      skipCount = 100

##----------------------------------------------------------------------------------------
##  Cleanup the camera and close any open windows
camera.release()
cv2.destroyAllWindows()

Saturday, November 26, 2016

Facebook Live Game


Still playing with Facebook Live polls? Take a look at my Facebook Live game!

Since I built the real-time Facebook interactive poll, I have kept seeing different brands using the idea on Facebook over the past two weeks. So I started thinking: the trick is already overplayed and getting boring; what could come next? It occurred to me that a user's reaction can be treated as a button press, and on that basis the idea can grow into a game. The problem is that each user can only press the button once, so how should the game be played? That led me to a fireworks game.

This should be the first application of its kind in Hong Kong, perhaps even the first in the world. I couldn't wait, and spent a whole day finishing the program and the related art and post-production. I do have other ideas, but handling both the programming and the artwork alone takes a lot of time; besides, this is a proof of concept, and the result above is already quite polished.
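The core of the idea is simply to poll the reaction counts on the live video and treat each new reaction as a key press. Below is a minimal sketch of that polling loop (not the actual game code), assuming a placeholder page access token and live video ID, and the Graph API v2.8 field-alias syntax for per-type reaction counts:

##  A minimal sketch of the reaction-polling idea, not the actual game code.
##  The access token, video ID, and Graph API version are placeholders.
import time
import requests

ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"   ##  Placeholder
VIDEO_ID = "LIVE_VIDEO_ID"                ##  Placeholder
TYPES = ["LIKE", "LOVE", "HAHA", "WOW", "SAD", "ANGRY"]

##  One request returns a total count per reaction type, using field aliases
fields = ",".join(
   "reactions.type(%s).limit(0).summary(total_count).as(%s)" % (kind, kind.lower())
   for kind in TYPES)
url = "https://graph.facebook.com/v2.8/%s" % VIDEO_ID

while True:
   data = requests.get(url, params={"fields": fields, "access_token": ACCESS_TOKEN}).json()
   counts = {kind: data[kind.lower()]["summary"]["total_count"] for kind in TYPES}
   print(counts)   ##  Each increase in a count is treated as one key press
   time.sleep(2)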

Wednesday, November 23, 2016

Predicting Stock Prices with the Google Prediction API

Today I tried something new: predicting stock prices with the Google Prediction API. I figure plenty of clever people overseas must have tried this already, and since there has been no big news about it, the accuracy presumably has not improved much. Even so, I wanted to taste the fun of it. Besides, I blended in some of the metaphysics I have studied; Chinese philosophy that Westerners barely know might just let me find the secret faster than they can.

First I wrote a PHP program to read the stock data out of 《股票經理》, add the upgraded Heavenly Stems and Earthly Branches (天干地支) API values, and produce a .csv dataset covering five years of 0005.HK. I then uploaded it to Google Cloud Storage, and fed the uploaded .csv to Google APIs Explorer to build a prediction model.
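The important constraint is the column order of the training .csv. A minimal sketch of the layout only, with a hypothetical rows() helper standing in for the real PHP generator:

##  A minimal sketch of the training .csv layout; the real generator is a
##  PHP program, and rows() here is a hypothetical stand-in for its data.
import csv

def rows():
   ##  Would yield (price, year, month, day, hour) tuples for five years of
   ##  0005.HK, plus any extra 天干地支 feature columns
   yield (60.10, 2016, 11, 22, 10)
   yield (59.95, 2016, 11, 22, 11)

with open("0005_HK.csv", "w", newline="") as handle:
   writer = csv.writer(handle)
   for row in rows():
      ##  First column is the value to predict (the price); no header row allowed
      writer.writerow(row)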


Building the model takes a little while, but Google Cloud Platform is several times faster than IBM Watson. Running the prediction.trainedmodels.get command shows the status of the trained model. The first training run returned a "trainingStatus: ERROR"; it turned out that the training .csv must not contain labels, headers, or anything of that sort.

After fixing that, I re-uploaded the .csv and trained again. The model was ready in about ten seconds.

Using the "prediction.trainedmodels.predict" command with parameters returns a prediction. Since my .csv contains price, year, month, day, and hour, and the first column must be the value to predict, namely the price, the input parameters are year, month, day, and hour. The screenshot below shows the prediction at that moment; it was close to the market price, so it seems usable as a reference. I ran two more predictions: today's price would drop to $59.74, then recover to $60.28 by noon tomorrow. Whether that proves accurate, tomorrow will tell. So exciting p(^_^)q
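The same predict call can also be made outside APIs Explorer. A minimal sketch with the requests library, where the project ID, model ID, and API key are placeholders:

##  A minimal sketch of calling prediction.trainedmodels.predict with the
##  requests library; the project ID, model ID, and API key are placeholders.
import requests

API_KEY = "YOUR_API_KEY"
URL = ("https://www.googleapis.com/prediction/v1.6/projects/"
       "dummy-c15ed/trainedmodels/stock/predict")

##  Inputs follow the training columns after the first: year, month, day, hour
body = {"input": {"csvInstance": [2016, 11, 24, 12]}}
result = requests.post(URL, params={"key": API_KEY}, json=body).json()
print(result.get("outputValue"))   ##  Regression models return outputValue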

Tuesday, November 22, 2016

Recognising Captchas with the Google Cloud Vision API


With the Captcha dataset ready, the next step was to upload it to Google Cloud Storage.

I then tested it with APIs Explorer, but unfortunately the failure rate was very high. It looks like using the Google Cloud Vision API against Captchas is not going to work; I will have to find another approach.

Monday, November 21, 2016

Recognising Text with the Google Cloud Vision API


Today I continued trying out Google Cloud Vision API v1. I grabbed the image above from Apple's website and uploaded it to Google Cloud Storage, then ran text detection with APIs Explorer.
Request
POST https://vision.googleapis.com/v1/images:annotate?key={YOUR_API_KEY}
{
   "requests": [{
      "features": [{
         "type": "TEXT_DETECTION"
      }],
      "image": {
         "source": {
            "gcsImageUri": "gs://dummy/macbook_pro_en.png"
         }
      }
   }]
}

Response
{
   "responses": [{
      "textAnnotations": [{
         "locale": "la",
         "description": "MacBook Pro\nA touch of genius.\n",
         "boundingPoly": {
            "vertices": [{
               "x": 126,
               "y": 47
            }, {
               "x": 410,
               "y": 47
            }, {
               "x": 410,
               "y": 122
            }, {
               "x": 126,
               "y": 122
            }]
         }
      }, {
         "description": "MacBook",
         "boundingPoly": {
            "vertices": [{
               "x": 175,
               "y": 47
            }, {
               "x": 306,
               "y": 47
            }, {
               "x": 306,
               "y": 73
            }, {
               "x": 175,
               "y": 73
            }]
         }
      }, {
         "description": "Pro",
         "boundingPoly": {
            "vertices": [{
               "x": 320,
               "y": 47
            }, {
               "x": 362,
               "y": 47
            }, {
               "x": 362,
               "y": 73
            }, {
               "x": 320,
               "y": 73
            }]
         }
      }, {
         "description": "A",
         "boundingPoly": {
            "vertices": [{
               "x": 126,
               "y": 79
            }, {
               "x": 141,
               "y": 79
            }, {
               "x": 141,
               "y": 122
            }, {
               "x": 126,
               "y": 122
            }]
         }
      }, {
         "description": "touch",
         "boundingPoly": {
            "vertices": [{
               "x": 157,
               "y": 79
            }, {
               "x": 243,
               "y": 79
            }, {
               "x": 243,
               "y": 122
            }, {
               "x": 157,
               "y": 122
            }]
         }
      }, {
         "description": "of",
         "boundingPoly": {
            "vertices": [{
               "x": 258,
               "y": 79
            }, {
               "x": 286,
               "y": 79
            }, {
               "x": 286,
               "y": 122
            }, {
               "x": 258,
               "y": 122
            }]
         }
      }, {
         "description": "genius.",
         "boundingPoly": {
            "vertices": [{
               "x": 296,
               "y": 79
            }, {
               "x": 410,
               "y": 79
            }, {
               "x": 410,
               "y": 122
            }, {
               "x": 296,
               "y": 122
            }]
         }
      }]
   }]
}

English text was recognised quite well; what about Chinese? I tried that too. The results were very good:
Request
POST https://vision.googleapis.com/v1/images:annotate?key={YOUR_API_KEY}
{
   "requests": [{
      "features": [{
         "type": "TEXT_DETECTION"
      }],
      "image": {
         "source": {
            "gcsImageUri": "gs://dummy/macbook_pro_tc.png"
         }
      }
   }]
}

Response
{
   "responses": [{
      "textAnnotations": [{
         "locale": "zh-Hant",
         "description": "MacBook Pro\n天才橫溢,一觸而發\n亢。\no\n",
         "boundingPoly": {
            "vertices": [{
               "x": 110,
               "y": 52
            }, {
               "x": 426,
               "y": 52
            }, {
               "x": 426,
               "y": 115
            }, {
               "x": 110,
               "y": 115
            }]
         }
      }, {
         "description": "MacBook",
         "boundingPoly": {
            "vertices": [{
               "x": 171,
               "y": 52
            }, {
               "x": 300,
               "y": 52
            }, {
               "x": 300,
               "y": 75
            }, {
               "x": 171,
               "y": 75
            }]
         }
      }, {
         "description": "Pro",
         "boundingPoly": {
            "vertices": [{
               "x": 311,
               "y": 52
            }, {
               "x": 355,
               "y": 52
            }, {
               "x": 355,
               "y": 75
            }, {
               "x": 311,
               "y": 75
            }]
         }
      }, {
         "description": "天才",
         "boundingPoly": {
            "vertices": [{
               "x": 110,
               "y": 82
            }, {
               "x": 176,
               "y": 82
            }, {
               "x": 176,
               "y": 115
            }, {
               "x": 110,
               "y": 115
            }]
         }
      }, {
         "description": "橫溢",
         "boundingPoly": {
            "vertices": [{
               "x": 178,
               "y": 82
            }, {
               "x": 244,
               "y": 82
            }, {
               "x": 244,
               "y": 114
            }, {
               "x": 178,
               "y": 114
            }]
         }
      }, {
         "description": ",",
         "boundingPoly": {
            "vertices": [{
               "x": 254,
               "y": 96
            }, {
               "x": 258,
               "y": 96
            }, {
               "x": 258,
               "y": 104
            }, {
               "x": 254,
               "y": 104
            }]
         }
      }, {
         "description": "一",
         "boundingPoly": {
            "vertices": [{
               "x": 277,
               "y": 96
            }, {
               "x": 307,
               "y": 96
            }, {
               "x": 307,
               "y": 99
            }, {
               "x": 277,
               "y": 99
            }]
         }
      }, {
         "description": "觸",
         "boundingPoly": {
            "vertices": [{
               "x": 311,
               "y": 83
            }, {
               "x": 342,
               "y": 83
            }, {
               "x": 342,
               "y": 114
            }, {
               "x": 311,
               "y": 114
            }]
         }
      }, {
         "description": "而",
         "boundingPoly": {
            "vertices": [{
               "x": 345,
               "y": 84
            }, {
               "x": 376,
               "y": 84
            }, {
               "x": 376,
               "y": 114
            }, {
               "x": 345,
               "y": 114
            }]
         }
      }, {
         "description": "發",
         "boundingPoly": {
            "vertices": [{
               "x": 380,
               "y": 83
            }, {
               "x": 410,
               "y": 83
            }, {
               "x": 410,
               "y": 114
            }, {
               "x": 380,
               "y": 114
            }]
         }
      }, {
         "description": "亢",
         "boundingPoly": {
            "vertices": [{
               "x": 395,
               "y": 91
            }, {
               "x": 409,
               "y": 92
            }, {
               "x": 409,
               "y": 102
            }, {
               "x": 395,
               "y": 101
            }]
         }
      }, {
         "description": "。",
         "boundingPoly": {
            "vertices": [{
               "x": 419,
               "y": 94
            }, {
               "x": 426,
               "y": 94
            }, {
               "x": 426,
               "y": 102
            }, {
               "x": 419,
               "y": 102
            }]
         }
      }, {
         "description": "o",
         "boundingPoly": {
            "vertices": [{
               "x": 419,
               "y": 95
            }, {
               "x": 425,
               "y": 95
            }, {
               "x": 425,
               "y": 103
            }, {
               "x": 419,
               "y": 103
            }]
         }
      }]
   }]
}
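The same annotate request can also be sent outside APIs Explorer. A minimal sketch with the requests library, assuming a placeholder API key and the same gs:// image path as above:

##  A minimal sketch of the same images:annotate call made with the requests
##  library; the API key is a placeholder, the gs:// path matches the one above.
import requests

API_KEY = "YOUR_API_KEY"
URL = "https://vision.googleapis.com/v1/images:annotate"

body = {"requests": [{
   "features": [{"type": "TEXT_DETECTION"}],
   "image": {"source": {"gcsImageUri": "gs://dummy/macbook_pro_tc.png"}}
}]}

result = requests.post(URL, params={"key": API_KEY}, json=body).json()
annotations = result["responses"][0].get("textAnnotations", [])
if annotations:
   ##  The first annotation holds the whole detected text block
   print(annotations[0]["description"])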

Sunday, November 20, 2016

Preparing a Captcha Dataset


Back in February 2015 I tried to crack Captchas, but failed. Recently I have been absorbing machine-learning knowledge non-stop, and there are approaches out there that use the Google Cloud Vision API against Google's Captcha, so I wanted to test my own ability in this area. Before that, though, I need some Captcha images and labels to train the brain with. The simplest way is to generate them with a program. I wrote two PHP programs:

The first generates a Captcha image and its label:
<?php
//----------------------------------------------------------------------------------------
//  Captcha Generator
//----------------------------------------------------------------------------------------
//  Platform: CentOS7 + PHP + Apache
//  Written by Pacess HO
//  Copyright 2016 Pacess Studio.  All rights reserved.
//----------------------------------------------------------------------------------------

header("Access-Control-Allow-Origin: https://home.pacess.com");
header("Access-Control-Allow-Methods: POST");

date_default_timezone_set("Asia/Hong_Kong");
mb_internal_encoding("UTF-8");
ini_set("memory_limit", "-1");
set_time_limit(0);

session_start();

//----------------------------------------------------------------------------------------
require_once "./securimage/securimage.php";

//========================================================================================
if ($_REQUEST["code"] == "1")  {

   header("Content-Type: text/html");
   $codeArray = $_SESSION["securimage_code_disp"];
   echo("Captcha:".$codeArray["default"]);
   exit(0);
}

$securimage = new Securimage();
$securimage->show();

?>

The second fetches the image and saves it with the label as the filename:
<?php
//----------------------------------------------------------------------------------------
//  Captcha Dataset Generator
//----------------------------------------------------------------------------------------
//  Platform: CentOS7 + PHP + Apache
//  Written by Pacess HO
//  Copyright 2016 Pacess Studio.  All rights reserved.
//----------------------------------------------------------------------------------------

header("Content-type: text/html");
header("Cache-Control: no-cache, must-revalidate");
header("Expires: Tue, 10 Mar 1987 00:00:00 GMT");

date_default_timezone_set("Asia/Hong_Kong");
mb_internal_encoding("UTF-8");
ini_set("memory_limit", "-1");
set_time_limit(0);

//----------------------------------------------------------------------------------------
$path = "./files/";
$cookieFile = $path."_cookie.txt";
$count = 1;

//========================================================================================
//  Main program
if (isset($_REQUEST["count"]))  {$count = intval($_REQUEST["count"]);}
for ($i=0; $i<$count; $i++)  {

   //  Get a Captcha image
   $curl = curl_init();
   curl_setopt($curl, CURLOPT_URL, "http://sitachan.local/captcha/getCode.php");
   curl_setopt($curl, CURLOPT_POST, 1);
   curl_setopt($curl, CURLOPT_POSTFIELDS, "code=0");
   curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
   curl_setopt($curl, CURLOPT_COOKIEJAR, $cookieFile); 
   curl_setopt($curl, CURLOPT_COOKIEFILE, $cookieFile); 
   $pngContent = curl_exec($curl);
   curl_close($curl);

   //  Get a Captcha value
   $curl = curl_init();
   curl_setopt($curl, CURLOPT_URL, "http://sitachan.local/captcha/getCode.php");
   curl_setopt($curl, CURLOPT_POST, 1);
   curl_setopt($curl, CURLOPT_POSTFIELDS, "code=1");
   curl_setopt($curl, CURLOPT_RETURNTRANSFER, 1);
   curl_setopt($curl, CURLOPT_COOKIEJAR, $cookieFile); 
   curl_setopt($curl, CURLOPT_COOKIEFILE, $cookieFile); 
   $string = curl_exec($curl);
   curl_close($curl);

   //  String: "Captcha: nYY6FF"
   $array = explode(":", $string);
   $code = trim($array[1]);  //  Trim in case of stray whitespace around the code
   if (strlen($code) == 0)  {$code = "default";}
   $filename = $code.".png";

   //  Save image
   $filePath = $path.$filename;
   $file = fopen($filePath, "w");
   fwrite($file, $pngContent);
   fclose($file);

   //----------------------------------------------------------------------------------------
   //  Output
   echo("<img src='$filePath' />");
   echo("Image size: ".strlen($pngContent));
   echo("String: $string");
   echo("Filename: $filename");
}
?>

Saturday, November 19, 2016

First Try of the Google Cloud Natural Language API


Besides the Google Prediction API, which can classify comments as positive or negative, Google also offers the Google Cloud Natural Language API for the same job. I tested it with APIs Explorer; unfortunately it does not yet support Traditional Chinese...
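For the record, this is roughly the kind of request I tried; a minimal sketch assuming the v1 documents:analyzeSentiment endpoint and a placeholder API key:

##  A minimal sketch of a sentiment request, assuming the v1
##  documents:analyzeSentiment endpoint and a placeholder API key.
import requests

API_KEY = "YOUR_API_KEY"
URL = "https://language.googleapis.com/v1/documents:analyzeSentiment"

body = {
   "document": {"type": "PLAIN_TEXT", "content": "Pretty cool!  I love it!"},
   "encodingType": "UTF8"
}
result = requests.post(URL, params={"key": API_KEY}, json=body).json()
print(result.get("documentSentiment"))   ##  Traditional Chinese content is not supported yet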

Tuesday, November 15, 2016

Judging Commenter Sentiment with the Google Prediction API


After IBM Watson, today I tried another machine-learning framework: the Google Prediction API. As before, I hope to use it for sentiment analysis, to judge whether a user's comment is positive or negative. Testing the Google Prediction API takes six steps:

1. Enable the Google Prediction API in the Google Cloud Console

2. Prepare the training data. I used the dataset at https://inclass.kaggle.com/c/si650winter11/data.

3. Upload it to Google Cloud Storage

4. Run the training job:
Request
POST https://www.googleapis.com/prediction/v1.6/projects/dummy-c15ed/trainedmodels?key={YOUR_API_KEY}
{
   "id": "sentiment",
   "storageDataLocation": "dummy-c15ed.appspot.com/sentiment_training.txt"
}
 
Response
{
   "kind": "prediction#training",
   "id": "sentiment",
   "selfLink": "https://www.googleapis.com/prediction/v1.6/projects/dummy-c15ed/trainedmodels/sentiment",
   "storageDataLocation": "dummy-c15ed.appspot.com/sentiment_training.txt"
}

5. Check the training status:
Request
GET https://www.googleapis.com/prediction/v1.6/projects/dummy-c15ed/trainedmodels/sentiment?key={YOUR_API_KEY}
 
Response
{
   "kind": "prediction#training",
   "id": "sentiment",
   "selfLink": "https://www.googleapis.com/prediction/v1.6/projects/dummy-c15ed/trainedmodels/sentiment",
   "created": "2016-11-15T07:15:34.690Z",
   "trainingStatus": "RUNNING"
}

Repeat until training completes:
Request
GET https://www.googleapis.com/prediction/v1.6/projects/dummy-c15ed/trainedmodels/sentiment?key={YOUR_API_KEY}
 
Response
{
   "kind": "prediction#training",
   "id": "sentiment",
   "selfLink": "https://www.googleapis.com/prediction/v1.6/projects/dummy-c15ed/trainedmodels/sentiment",
   "created": "2016-11-15T07:15:34.690Z",
   "trainingComplete": "2016-11-15T07:16:26.026Z",
   "modelInfo": {
      "numberInstances": "7085",
      "modelType": "classification",
      "numberLabels": "2",
      "classificationAccuracy": "0.98"
   },
   "trainingStatus": "DONE"
}

6. Feed in new comments to test:
Request
POST https://www.googleapis.com/prediction/v1.6/projects/dummy-c15ed/trainedmodels/sentiment/predict?key={YOUR_API_KEY}
{
   "input": {
      "csvInstance": [
         "This is really a poor product, waste my time!"
      ]
   }
}
 
Response
{
   "kind": "prediction#output",
   "id": "sentiment",
   "selfLink": "https://www.googleapis.com/prediction/v1.6/projects/dummy-c15ed/trainedmodels/sentiment/predict",
   "outputLabel": "Negative",
   "outputMulti": [
      {
         "label": "Positive",
         "score": "0.353047"
      }, {
         "label": "Negative",
         "score": "0.646953"
      }
   ]
}
Request
POST https://www.googleapis.com/prediction/v1.6/projects/dummy-c15ed/trainedmodels/sentiment/predict?key={YOUR_API_KEY}
{
   "input": {
      "csvInstance": [
         "Pretty cool!  I love it!"
      ]
   }
}
 
Response
{
   "kind": "prediction#output",
   "id": "sentiment",
   "selfLink": "https://www.googleapis.com/prediction/v1.6/projects/dummy-c15ed/trainedmodels/sentiment/predict",
   "outputLabel": "Positive",
   "outputMulti": [
      {
         "label": "Positive",
         "score": "0.998411"
      }, {
         "label": "Negative",
         "score": "0.001589"
      }
   ]
}

Since the training used English material, the tests had to be in English as well. The results look good. The next step is to find Chinese material.