In the previous posts we analyzed a gesture classification system that can be used as a submodule in a larger configuration. In this post we finally focus on the drawing application itself. As already mentioned, this implementation uses only the static gesture classifier, which, judging from the results on the test dataset, is quite accurate. The flow diagram of the application can be found here.

To show that the retrieved skeleton endpoint can be exploited further, I also make use of the Protractor algorithm, also known as $N. Combined with a mapping from gestures to drawing actions, it allows the application to recognize various predefined shapes drawn by the user. In theory, this lets us infer the user's intention and improve the drawing quality by replacing the raw strokes with the corresponding clean shapes (a minimal recognition sketch follows below). Unfortunately, I rarely achieved the desired result: I borrowed the algorithm from the Kivy module, where it is still under active development, and it probably needs extra parametrization or more predefined templates to cope with the noise in the drawn strokes.

In addition to that feature, I also use the distance of the hand from the surface and the sensor to let the user dynamically change the drawing size, hinting at how the depth dimension can be harnessed for more realistic results (a second sketch below illustrates this mapping).

In the videos below you can see the application in action. (I am a really bad painter...)
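As an illustration of the shape-recognition step, here is a minimal sketch built around Kivy's multistroke Recognizer, the Protractor-based implementation mentioned above. The templates, the score threshold and the SHAPE_ACTIONS mapping are illustrative placeholders rather than what the application actually uses, and the immediate-search call (max_gpf=0) and the .best result dictionary reflect my reading of the kivy.multistroke API.

```python
# Sketch: recognize a user's stroke against a few shape templates and,
# on a good match, replace it with the corresponding clean shape.
from math import cos, sin, pi

from kivy.vector import Vector
from kivy.multistroke import Recognizer

gdb = Recognizer()

# Register a couple of illustrative shape templates.
circle = [[Vector(100 + 50 * cos(2 * pi * i / 32),
                  100 + 50 * sin(2 * pi * i / 32)) for i in range(33)]]
gdb.add_gesture('circle', circle)

line = [[Vector(x, 0) for x in range(0, 101, 10)]]
gdb.add_gesture('line', line)

# Hypothetical mapping from recognized shape to a drawing action;
# canvas.draw_circle / canvas.draw_line are placeholders.
SHAPE_ACTIONS = {
    'circle': lambda canvas, stroke: canvas.draw_circle(stroke),
    'line':   lambda canvas, stroke: canvas.draw_line(stroke),
}

def replace_stroke_with_shape(canvas, stroke, min_score=0.7):
    """If the raw stroke matches a template well enough, redraw it
    as the corresponding predefined shape; otherwise keep it as-is."""
    # max_gpf=0 asks the Recognizer to finish the search immediately
    # instead of spreading it over Kivy Clock frames (assumption).
    progress = gdb.recognize([stroke], max_gpf=0)
    best = progress.best
    if best['name'] is not None and best['score'] >= min_score:
        SHAPE_ACTIONS[best['name']](canvas, stroke)
        return best['name']
    return None  # no confident match: keep the stroke as drawn
```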

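The depth-based brush sizing boils down to clamping the measured hand distance to a working range and mapping it to a brush radius. The ranges below, and the choice that a larger distance gives a larger brush, are illustrative assumptions; the actual application may be calibrated differently or use the opposite direction.

```python
# Sketch: map the hand's distance from the surface to a brush size.
MIN_DEPTH_MM, MAX_DEPTH_MM = 30.0, 250.0   # assumed usable depth range
MIN_BRUSH_PX, MAX_BRUSH_PX = 2.0, 40.0     # assumed brush radius limits

def brush_size_from_depth(depth_mm: float) -> float:
    """Return a brush radius in pixels for a measured depth in mm."""
    # Clamp to the working range so sensor outliers do not produce
    # extreme brush sizes.
    depth_mm = max(MIN_DEPTH_MM, min(MAX_DEPTH_MM, depth_mm))
    # Normalize to [0, 1]: 0 = hand at the surface, 1 = far away.
    t = (depth_mm - MIN_DEPTH_MM) / (MAX_DEPTH_MM - MIN_DEPTH_MM)
    # Illustrative choice: farther from the surface -> larger stroke.
    return MIN_BRUSH_PX + t * (MAX_BRUSH_PX - MIN_BRUSH_PX)
```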
 
