Wednesday, August 28, 2019

Solving VNC Connection Problems on Ubuntu 18


I recently set up an Ubuntu server at the office, intending it to run a web server, PHP, Python and PHP in Jupyter, a proxy server, machine-learning training jobs, and VNC. I enabled Screen Sharing on Ubuntu 18, but connecting from a Mac failed with a version error and an unsupported-encryption error. The problem was that the Ubuntu 18 interface offered no encryption option at all. I eventually found a tool called "dconf-editor", which exposes settings the system does not surface; simply turning off "require-encryption" solved the problem.
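The same hidden key that dconf-editor exposes can also be toggled from the command line. The sketch below is a hedged example: it assumes Ubuntu 18's Screen Sharing is backed by the Vino VNC server under the org.gnome.Vino schema, so verify the schema with dconf-editor on your own machine before running it.

```python
import subprocess

##  Hedged sketch: build the gsettings command that toggles the hidden
##  "require-encryption" key.  Assumes the org.gnome.Vino schema, which
##  backs Ubuntu 18's built-in Screen Sharing (verify with dconf-editor)
def vinoEncryptionCommand(enabled):
   value = "true" if enabled else "false"
   return ["gsettings", "set", "org.gnome.Vino", "require-encryption", value]

if __name__ == "__main__":
   ##  Print the command here; on the server itself it could be executed
   ##  with subprocess.run(vinoEncryptionCommand(False), check=True)
   print(" ".join(vinoEncryptionCommand(False)))
```

After turning the key off, the macOS Screen Sharing client connects without the encryption error.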

Saturday, March 2, 2019

Connecting a PSVR to a MacBook Pro


A week or two ago I installed the Littlstar app on my PS4 Pro to play stereoscopic videos stored on USB. To my surprise, after today's automatic update this free app only plays the first two minutes of a video; playing the whole thing requires either a monthly subscription or a one-off payment of US$39.9. With Littlstar now charging for USB playback while offering little official content of its own, many users have left to look for alternatives. One of them is to connect the PSVR to macOS.

The method uses the open-source program at https://github.com/emoRaivis/MacMorpheus. Power on the PS4 Pro and the PSVR, then reconnect the cables as shown in the picture above; the MacBook Pro's screen can then be projected onto the PSVR.

Friday, March 1, 2019

Editing Photos with SC-FEGAN


While scrolling Facebook today I saw a friend share a post about editing photos with SC-FEGAN. It included a link to the source code, so I gave it a try. The author also shares the model file, which is not large, only 372 MB. Installation was completely smooth, and it runs on macOS too. Processing one photo without a GPU takes only 3.5 seconds, and the result is surprisingly good. I modified the program to display three images: the original on the left, the edited input in the middle, and the computed result on the right.

Saturday, January 5, 2019

Improving Photo Aesthetics with Machine Learning


I enjoy still photography, looking at things from different angles. Sometimes, though, a shot goes wrong: however good the composition, if the exposure is off or the focus misses, the photo suffers.

It turns out that a machine-learning approach, a Residual Convolutional Neural Network, can improve a photo's aesthetics. The training data is built from a set of photos taken with a DSLR paired with a set taken with an ordinary camera, or taken from the DSLR Photo Enhancement Dataset. The images are scored by three CNN-based measures: color distortion, texture distortion, and content distortion. With these, the photo's aesthetics can be adjusted, and the results are good. A 2592x1936 photo takes 4 minutes on a 2.5 GHz Intel Core i5. The original is on the left, the processed result on the right.
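As a hedged sketch of how the three measures above might work together, the snippet below combines hypothetical color, texture, and content distortion scores into one weighted objective for the enhancer to minimize. The function name and the weights are illustrative placeholders, not values from the actual model.

```python
##  Hedged sketch: combine the three CNN-based distortion measures into a
##  single objective.  The weights here are illustrative placeholders
def totalDistortion(color, texture, content, weights=(1.0, 0.4, 1.0)):
   colorWeight, textureWeight, contentWeight = weights
   return colorWeight*color + textureWeight*texture + contentWeight*content

if __name__ == "__main__":
   ##  Lower is better; an enhancer adjusts the photo to reduce this value
   print(totalDistortion(0.5, 0.25, 0.1))
```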

Saturday, December 29, 2018

Programming the Standard Tello


During the Christmas holidays I saw a DJI ad on Facebook: an entry-level camera drone for only HK$619, which was very tempting. I had wanted to build a drone even before aerial photography became popular, but never got around to it. I had seen drones priced in the HK$400-$500 range before, but neither their quality nor their looks were good. This DJI model satisfies both requirements for just a few hundred dollars, so I bought one.


The Tello is simple to operate: a phone app controls it. The specifications claim a flight range of up to 100 m, but with an iPhone XR or an iPad Pro I could only reach about 30 m, and about three storeys when climbing. Staff at the flagship store said this is because the WiFi chip in a phone is small and receives poorly; a dedicated remote controller would do much better. I wanted to control the Tello from a computer with an external WiFi USB adapter that has an antenna, so I did some research and found the TelloPy Python module. I wrote a simple test program that makes the Tello take off, take a photo, and land.
##----------------------------------------------------------------------------------------
##  Tello DEMO Program
##----------------------------------------------------------------------------------------
##  Platform: Python 3.6 + TelloPy
##  Written by Pacess HO
##  Copyright Pacess Studio, 2018.  All rights reserved
##----------------------------------------------------------------------------------------

from time import sleep
import tellopy

##  Global variable
_counter = 0

##----------------------------------------------------------------------------------------
def handler(event, sender, data, **args):
   global _counter
   drone = sender
   
   if event is drone.EVENT_FLIGHT_DATA:
      print(data)
   
   if event is drone.EVENT_FILE_RECEIVED:
      _counter = _counter+1
      path = "tello_%s.jpg" % str(_counter)
      with open(path, "wb") as file:
         file.write(data)

##----------------------------------------------------------------------------------------
def flyNow():
   drone = tellopy.Tello()
   try:
      drone.subscribe(drone.EVENT_FLIGHT_DATA, handler)
      drone.subscribe(drone.EVENT_FILE_RECEIVED, handler)

      drone.connect()
      drone.wait_for_connection(60.0)
      
      drone.takeoff()
      sleep(3)
      
      drone.take_picture()
      sleep(3)
      
      drone.land()
      sleep(3)
   
   except Exception as ex:
      print(ex)
   
   finally:
      drone.quit()

##----------------------------------------------------------------------------------------
if __name__ == '__main__':
   flyNow()

Tuesday, December 25, 2018

Detecting Skeleton Positions with TensorFlow + PoseNet


My daughter recently entered a dance competition, and I wondered how to determine skeleton positions from images so I could collect motion data. I found that TensorFlow + PoseNet can do it. Below is a simple detection program written in JavaScript:
<html>
   <head>
      <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
      <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/posenet"></script>
   </head>

   <body>
      <canvas id="canvas"></canvas>
      <img id="photo" src="./sport_01.jpg" style="display:none;" />

      <script>
         var image = document.getElementById("photo");
         var imageScaleFactor = 0.2;
         var flipHorizontal = false;
         var outputStride = 16;

         //----------------------------------------------------------------------------------------
         function drawConnection(context, keypoints, partA, partB)  {
            var radius = 8;
            var partAPoint = null;
            var partBPoint = null;
            for (var i=0; i<keypoints.length; i++)  {

               var element = keypoints[i];
               var part = element.part;
               if (part != partA && part != partB)  {continue;}

               //  Either matches part A or part B
               if (part == partA)  {partAPoint = element.position;}
               if (part == partB)  {partBPoint = element.position;}

               //  Skip until both positions have been set
               if (partAPoint == null || partBPoint == null)  {continue;}

               //  Both parts are ready, connect them
               context.beginPath();
               context.arc(partAPoint.x, partAPoint.y, radius, 0, 2*Math.PI, false);
               context.fillStyle = 'green';
               context.fill();

               context.beginPath();
               context.moveTo(partAPoint.x, partAPoint.y);
               context.lineTo(partBPoint.x, partBPoint.y);
               context.strokeStyle = 'green';
               context.stroke();

               context.beginPath();
               context.arc(partBPoint.x, partBPoint.y, radius, 0, 2*Math.PI, false);
               context.fillStyle = 'green';
               context.fill();
            }
         }

         //----------------------------------------------------------------------------------------
         function drawConnection12(context, keypoints, partA, partB, partC)  {
            var radius = 8;
            var partAPoint = null;
            var partBPoint = null;
            var partCPoint = null;
            for (var i=0; i<keypoints.length; i++)  {

               var element = keypoints[i];
               var part = element.part;
               if (part != partA && part != partB && part != partC)  {continue;}

               //  Either matches part A or part B or part C
               if (part == partA)  {partAPoint = element.position;}
               if (part == partB)  {partBPoint = element.position;}
               if (part == partC)  {partCPoint = element.position;}

               //  Skip until all three positions have been set
               if (partAPoint == null || partBPoint == null || partCPoint == null)  {continue;}

               var pointX = (partBPoint.x+partCPoint.x)/2;
               var pointY = (partBPoint.y+partCPoint.y)/2;

               //  Both parts are ready, connect them
               context.beginPath();
               context.arc(partAPoint.x, partAPoint.y, radius, 0, 2*Math.PI, false);
               context.fillStyle = 'green';
               context.fill();

               context.beginPath();
               context.moveTo(partAPoint.x, partAPoint.y);
               context.lineTo(pointX, pointY);
               context.strokeStyle = 'green';
               context.stroke();
            }
         }

         //----------------------------------------------------------------------------------------
         function drawConnection22(context, keypoints, partA, partB, partC, partD)  {
            var radius = 8;
            var partAPoint = null;
            var partBPoint = null;
            var partCPoint = null;
            var partDPoint = null;
            for (var i=0; i<keypoints.length; i++)  {

               var element = keypoints[i];
               var part = element.part;
               if (part != partA && part != partB && part != partC && part != partD)  {continue;}

               //  Either matches part A or part B or part C or part D
               if (part == partA)  {partAPoint = element.position;}
               if (part == partB)  {partBPoint = element.position;}
               if (part == partC)  {partCPoint = element.position;}
               if (part == partD)  {partDPoint = element.position;}

               //  Skip until all four positions have been set
               if (partAPoint == null || partBPoint == null || partCPoint == null || partDPoint == null)  {continue;}

               var pointX1 = (partAPoint.x+partBPoint.x)/2;
               var pointY1 = (partAPoint.y+partBPoint.y)/2;
               var pointX2 = (partCPoint.x+partDPoint.x)/2;
               var pointY2 = (partCPoint.y+partDPoint.y)/2;

               //  Both parts are ready, connect them
               context.beginPath();
               context.arc(pointX1, pointY1, radius, 0, 2*Math.PI, false);
               context.fillStyle = 'green';
               context.fill();

               context.beginPath();
               context.moveTo(pointX1, pointY1);
               context.lineTo(pointX2, pointY2);
               context.strokeStyle = 'green';
               context.stroke();
            }
         }

         //----------------------------------------------------------------------------------------
         posenet.load().then(function(net)  {
            return net.estimateSinglePose(image, imageScaleFactor, flipHorizontal, outputStride);
         }).then(function(pose)  {

            var width = image.width;
            var height = image.height;

            var canvas = document.getElementById("canvas");
            canvas.width = width;
            canvas.height = height;

            var context = canvas.getContext("2d");
            context.drawImage(image, 0, 0);

            var keypoints = pose.keypoints;
            drawConnection(context, keypoints, "leftEye", "rightEye");
            drawConnection(context, keypoints, "leftEye", "nose");
            drawConnection(context, keypoints, "rightEye", "nose");

            drawConnection(context, keypoints, "leftEar", "leftEar");
            drawConnection(context, keypoints, "rightEar", "rightEar");

            drawConnection(context, keypoints, "leftShoulder", "rightShoulder");
            drawConnection(context, keypoints, "leftShoulder", "leftElbow");
            drawConnection(context, keypoints, "rightShoulder", "rightElbow");
            drawConnection(context, keypoints, "leftElbow", "leftWrist");
            drawConnection(context, keypoints, "rightElbow", "rightWrist");

            drawConnection(context, keypoints, "leftHip", "rightHip");
            drawConnection(context, keypoints, "leftHip", "leftKnee");
            drawConnection(context, keypoints, "rightHip", "rightKnee");
            drawConnection(context, keypoints, "leftKnee", "leftAnkle");
            drawConnection(context, keypoints, "rightKnee", "rightAnkle");

            drawConnection12(context, keypoints, "nose", "leftShoulder", "rightShoulder");
            drawConnection22(context, keypoints, "leftShoulder", "rightShoulder", "leftHip", "rightHip");
         });
      </script>
   </body>
</html>

Thursday, November 29, 2018

Creating Artwork for 「智泉拾叁」


On November 22, 2018, 「智泉 13」 officially began. It had been about 17 years since my last experiential course. Just as with "IN117" 19 years ago, I helped design the team logo and uniform, and also let my imagination loose on some artwork. One piece combines the classic taichi (yin-yang) symbol with the Chinese characters 「拾叁」 (thirteen). Drawing the taichi symbols one by one would be very time-consuming; as a programmer, this is where code helps.


To create this effect, first prepare a mask image: black marks the areas where taichi symbols may be drawn, white marks the areas to leave blank. Record the positions of the mask's black pixels, then randomly generate taichi symbols at those positions. I wanted to present the whole process as an animation, so I added a Circle class to handle each taichi growing from small to large, and recorded every frame along the way. Finally, the video is generated with the ffmpeg command "ffmpeg -framerate 60 -i out_%04d.png -s:v 1024x550 -c:v libx264 -profile:v high -crf 20 -pix_fmt yuv420p w13.mp4".
<?php
//----------------------------------------------------------------------------------------
//  Packing Circle with Mask Image
//----------------------------------------------------------------------------------------
//  Platform: macOS Mojave + PHP5
//  Written by Pacess HO
//  Copyright Pacess Studio, 2018.  All rights reserved.
//----------------------------------------------------------------------------------------

class Circle  {
   private $isGrowing = true;

   public $x = 0;
   public $y = 0;
   public $r = 3.0;

   //----------------------------------------------------------------------------------------
   function setup($x, $y)  {
      $this->x = $x;
      $this->y = $y;
      $this->r = 3.0;
   }

   //----------------------------------------------------------------------------------------
   function draw($image)  {
      $foreground = imagecolorallocate($image, 0, 0, 0);
      imageellipse($image, $this->x, $this->y, $this->r*2, $this->r*2, $foreground);
   }

   //----------------------------------------------------------------------------------------
   function grow()  {
      if ($this->isGrowing == false)  {return;}
      $this->r++;

      if ($this->r > 30)  {$this->isGrowing = false;}
   }

   //----------------------------------------------------------------------------------------
   function stopGrow()  {$this->isGrowing = false;}

   //----------------------------------------------------------------------------------------
   function isEdge($width, $height)  {
      if (($this->x-$this->r) < 0)  {return true;}
      if (($this->y-$this->r) < 0)  {return true;}
      if (($this->x+$this->r) > $width)  {return true;}
      if (($this->y+$this->r) > $height)  {return true;}
      return false;
   }
}

//========================================================================================
//  Main program
$_width = 1024;
$_height = 768;

//  Loading mask image
list($_width, $_height) = getimagesize("w13.png");
$backgroundImage = imagecreatefrompng("w13.png");

//  Convert mask into spot array
$spotArray = array();
for ($y=0; $y<$_height; $y++)  {
   for ($x=0; $x<$_width; $x++)  {
      $color = imagecolorat($backgroundImage, $x, $y);
      $blue = $color&255;
      if ($blue >= 80)  {continue;}

      $spotArray[] = array($x, $y);
   }
}

//----------------------------------------------------------------------------------------
//  Create logo animation
$max = 99999;
$_array = array();
for ($i=0; $i<1000; $i++)  {

   //  New circle
   for ($j=0; $j<5; $j++)  {

      $valid = true;
      $value = rand(0, count($spotArray)-1);
      $spot = $spotArray[$value];
      $x = $spot[0];
      $y = $spot[1];

      $newCircle = new Circle();
      $newCircle->setup($x, $y);
      foreach ($_array as $circle)  {

         $distance = sqrt(pow($circle->x-$newCircle->x, 2)+pow($circle->y-$newCircle->y, 2));
         if ($distance < ($circle->r+$newCircle->r+2))  {$valid = false;}
      }

      if ($valid == true && $max > 0)  {
         $_array[] = $newCircle;
         $max--;
      }
   }

   //  Create image
   $image = imagecreatetruecolor($_width, $_height);
   $foreground = imagecolorallocate($image, 0, 0, 0);
   $background = imagecolorallocate($image, 255, 255, 255);
   imagefilledrectangle($image, 0, 0, $_width, $_height, $background);
   foreach ($_array as $circle)  {

      $boolean = $circle->isEdge($_width, $_height);
      if ($boolean == true)  {$circle->stopGrow();}

      $x = $circle->x;
      $y = $circle->y;
      $r = $circle->r;
      imageellipse($image, $x, $y, $r*2, $r*2, $foreground);

      //  Overlapping
      $overlapping = false;
      foreach ($_array as $circle2)  {

         if ($circle == $circle2)  {continue;}
         $distance = sqrt(pow($circle->x-$circle2->x, 2)+pow($circle->y-$circle2->y, 2));
         if ($distance < ($circle->r+$circle2->r+2))  {$circle->stopGrow();}
      }

      $circle->grow();
   }

   $filename = sprintf("out_%04d.png", $i);
   imagepng($image, $filename);
   imagedestroy($image);
}
?>

Saturday, November 10, 2018

Preparing the WhatsApp Sticker Format


WhatsApp recently launched a sticker feature. What makes it special is that stickers are not downloaded from a built-in store but are added by external apps. WhatsApp provides a reference app so that anyone can add their own stickers. Stickers must, however, be 512x512-pixel PNG or WebP images.

Some of my stickers were not in this format, so I wrote a PHP program to do the preparation:
<?php
//----------------------------------------------------------------------------------------
//  Create a square base transparent image
//----------------------------------------------------------------------------------------
//  Platform: macOS Mojave + PHP
//  Written by Pacess HO
//  Copyright Pacess Studio, 2018.  All rights reserved.
//----------------------------------------------------------------------------------------

$width = 512;
$height = 512;

//----------------------------------------------------------------------------------------
$fileArray = scandir("./");
foreach ($fileArray as $filename)  {

   //  Skip directories
   if ($filename == ".")  {continue;}
   if ($filename == "..")  {continue;}

   $index = strpos($filename, ".png");
   if ($index === false)  {continue;}

   //  This is a PNG file, create output image
   echo("Processing $filename...\n");
   $outputImage = imagecreatetruecolor($width, $height);
   imagealphablending($outputImage, false);
   $color = imagecolorallocatealpha($outputImage, 255, 255, 255, 127);
   imagefilledrectangle($outputImage, 0, 0, $width, $height, $color);
   imagealphablending($outputImage, true);

   //  Get image size
   $size = getimagesize($filename);
   $imageWidth = $size[0];
   $imageHeight = $size[1];

   //  Calculate zoom scale
   $scaleW = $width/$imageWidth;
   $scaleH = $height/$imageHeight;

   $scale = $scaleW;
   if ($scaleW > $scaleH)  {$scale = $scaleH;}
   $zoomWidth = intval($imageWidth*$scale);
   $zoomHeight = intval($imageHeight*$scale);

   //  Put image to output image
   $x = intval(($width-$zoomWidth)/2);
   $y = intval(($height-$zoomHeight)/2);

   $stickerImage = imagecreatefrompng($filename);
   if ($stickerImage == null)  {continue;}
   imagecopyresampled($outputImage, $stickerImage, $x, $y, 0, 0, $zoomWidth, $zoomHeight, $imageWidth, $imageHeight);

   imagealphablending($outputImage, false);
   imagesavealpha($outputImage, true);
   imagepng($outputImage, "_".$filename);
   imagedestroy($outputImage);
}
?>

Monday, November 5, 2018

Fixing Photo Dates with Python


Last week I joined the "2018 Northern Taiwan IoT Investment and Cooperation Delegation" organized by the WTIA, and brought a camera and a phone to photograph the activities. Back in Hong Kong, I discovered the camera was still set one hour fast from a trip to Nagoya in August, so I wrote the Python program below. It reads each photo's EXIF data and shifts the date of every photo taken with a Canon back one hour, restoring the correct time.
##----------------------------------------------------------------------------------------
##  Fix Photo Creation Date
##----------------------------------------------------------------------------------------
##  Platform: macOS Mojave + Python 3
##  Copyright Pacess Studio, 2018.  All rights reserved.
##----------------------------------------------------------------------------------------

import os
import time
import exifread

##----------------------------------------------------------------------------------------
##  Global variables
_path = "./"

##----------------------------------------------------------------------------------------
##  Get files from directory
for root, dirs, files in os.walk(_path):

   for file in files:
   
      if file.startswith("."):
         continue

      if not file.endswith(".JPG"):
         continue

      print("\nProcessing "+file+"...", end="")

      ##----------------------------------------------------------------------------------------
      ##  Get EXIF
      with open(_path+file, "rb") as handle:
         tags = exifread.process_file(handle)
   
      machine = str(tags["Image Make"])
      print(machine, end="")

      ##  Process only if "Canon"
      if "Canon" not in machine:
         continue

      ##----------------------------------------------------------------------------------------
      ##  Subtract one hour
      #datetime = os.path.getmtime(file)
      #timeString = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(datetime))
      timeString = str(tags["Image DateTime"])
      datetime = time.mktime(time.strptime(timeString, "%Y:%m:%d %H:%M:%S"))

      newDatetime = datetime-(60*60)
      newTimeString = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(newDatetime))
      
      os.utime(_path+file, (newDatetime, newDatetime))
      print(" ("+timeString+" => GMT:"+newTimeString+")", end="")

print("\nDone\n")

Monday, October 8, 2018

Converting a Darknet Model to a CoreML Model

To convert a Darknet model to a CoreML model, first use Darkflow to save the weights as a TensorFlow PB file:
$ ./flow --model yolo-c3.cfg --load yolo-c3.weights --savepb
Then convert with tfcoreml. Because tfcoreml depends on a particular combination of package versions, it is best to isolate them in a virtual environment such as Conda:
$ git clone https://github.com/tf-coreml/tf-coreml.git
$ cd tf-coreml/
$ conda create --name tf-coreml python=3.6
$ source activate tf-coreml
$ pip install -e .
Once the required versions are installed, the next step is the conversion itself. Copy darkflow/built_graph/yolo-c3.pb into the tf-coreml directory and start Python:
$ python
Enter the following Python program. Note: change the value of "kerasModelPath" below to the path of your own PB file:
import tfcoreml as tf_converter
import tensorflow as tf

##----------------------------------------------------------------------------------------
##  We load the protobuf file from the disk and parse it to retrieve the unserialized graph_def
def load_graph(frozen_graph_filename):
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        
    # Then, we import the graph_def into a new Graph and return it 
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")
    return graph

##----------------------------------------------------------------------------------------
##  Load the frozen TensorFlow graph
kerasModelPath = 'yolo-c3.pb'
graph = load_graph(kerasModelPath)
for op in graph.get_operations(): 
    print (op.name)

##----------------------------------------------------------------------------------------
##  Convert the frozen TensorFlow model to a Core ML model
##  output_feature_names: the output node name we got from the previous step
##  image_input_names: CoreML accepts an image as input; we only need to say which node is the image input node
##  input_name_shape_dict: the input node name from the previous step; check the cfg file for the exact input shape
##  is_bgr: the channel order is BGR instead of RGB
##  image_scale: scale pixel values so the input is normalized to the range 0 to 1
coreml_model = tf_converter.convert(tf_model_path=kerasModelPath, mlmodel_path='yolo.mlmodel', output_feature_names=['grid'], image_input_names= ['image'], input_name_shape_dict={'image': [1, 416, 416, 3]}, is_bgr=True, image_scale=1/255.0)
When it finishes, the model file tf-coreml/yolo.mlmodel is produced.