Gesture recognition and pen computing: Pen computing reduces the hardware impact of a system and also increases the range of physical-world objects usable for control beyond traditional digital objects like keyboards and mice. Such implements can enable a new range of hardware that does not require monitors, an idea that may lead to the creation of holographic displays. The term gesture recognition has also been used to refer more narrowly to non-text-input handwriting symbols, such as inking on a graphics tablet, multi-touch gestures, and mouse gesture recognition: computer interaction through the drawing of symbols with a pointing-device cursor (see Pen computing).
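Mouse gesture recognition of this kind is commonly implemented by quantizing the cursor's path into a short sequence of compass directions and matching it against stored templates. The minimal sketch below illustrates that idea under stated assumptions; it is not from any particular library, and the template names and thresholds are illustrative.

<code python>
# Sketch of mouse-gesture recognition by direction quantization.
# Templates, names, and thresholds are illustrative assumptions.
import math

# Hypothetical gesture templates: sequences of 8-way directions.
TEMPLATES = {
    "right-then-down": ["E", "S"],
    "left":            ["W"],
}

DIRS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def quantize(points, min_dist=10.0):
    """Collapse a cursor stroke into a sequence of 8-way directions."""
    seq = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y0 - y1          # screen y grows downward
        if math.hypot(dx, dy) < min_dist:  # ignore jitter
            continue
        angle = math.atan2(dy, dx) % (2 * math.pi)
        d = DIRS[int((angle + math.pi / 8) / (math.pi / 4)) % 8]
        if not seq or seq[-1] != d:        # drop consecutive repeats
            seq.append(d)
    return seq

def recognize(points):
    seq = quantize(points)
    for name, template in TEMPLATES.items():
        if seq == template:
            return name
    return None

# A stroke moving right then down matches "right-then-down".
stroke = [(0, 0), (40, 0), (80, 0), (80, 40), (80, 80)]
print(recognize(stroke))  # -> right-then-down
</code>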
  
====Gesture types====
 In computer interfaces, two types of gestures are distinguished:[9] online gestures, which can also be regarded as direct manipulations like scaling and rotating; and offline gestures, which are usually processed after the interaction is finished, e.g. a circle drawn to activate a context menu.
  
  * Offline gestures: those gestures that are processed after the user's interaction with the object. An example is the gesture to activate a menu.
  * Online gestures: direct manipulation gestures, used to scale or rotate a tangible object (see the sketch after this list).
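
To make the online/offline split concrete, the following minimal sketch handles both kinds in one handler: a pinch updates scale continuously while the interaction is in progress (online), whereas a drawn stroke is only buffered during the interaction and classified on release (offline). The event model, handler names, and the crude circle test are assumptions for illustration, not a real framework API.

<code python>
# Hedged sketch of online vs. offline gesture handling.
import math

def looks_like_circle(stroke, tol=0.3):
    """Crude offline test: points stay near a fixed radius from the centroid."""
    if len(stroke) < 8:
        return False
    cx = sum(x for x, _ in stroke) / len(stroke)
    cy = sum(y for _, y in stroke) / len(stroke)
    radii = [math.hypot(x - cx, y - cy) for x, y in stroke]
    mean_r = sum(radii) / len(radii)
    return mean_r > 0 and max(abs(r - mean_r) for r in radii) / mean_r < tol

class GestureHandler:
    def __init__(self):
        self.scale = 1.0   # state mutated live by the online gesture
        self.stroke = []   # points buffered for the offline gesture

    # Online: pinch-to-zoom is applied on every move event.
    def on_pinch_move(self, prev_pair, cur_pair):
        prev = math.dist(*prev_pair)
        cur = math.dist(*cur_pair)
        if prev > 0:
            self.scale *= cur / prev

    # Offline: the stroke is only collected here ...
    def on_pointer_move(self, point):
        self.stroke.append(point)

    # ... and interpreted once the interaction has finished.
    def on_pointer_up(self):
        if looks_like_circle(self.stroke):
            print("circle gesture -> open context menu")
        self.stroke = []

h = GestureHandler()
h.on_pinch_move(((0, 0), (10, 0)), ((0, 0), (20, 0)))
print(h.scale)  # 2.0: the object scaled during the interaction

for i in range(16):  # feed a roughly circular stroke, then release
    a = 2 * math.pi * i / 16
    h.on_pointer_move((math.cos(a), math.sin(a)))
h.on_pointer_up()   # -> circle gesture -> open context menu
</code>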
====Input devices====
 The ability to track a person's movements and determine what gestures they may be performing can be achieved through various tools. Although a large amount of research has been done in image- and video-based gesture recognition, the tools and environments used vary between implementations.
  
  * Single camera. A standard 2D camera can be used for gesture recognition where the resources or environment would be inconvenient for other forms of image-based recognition. It was earlier thought that a single camera may not be as effective as stereo or depth-aware cameras, but some companies are challenging this theory with software-based gesture recognition technology that detects robust hand gestures from a standard 2D camera (a minimal sketch follows this list).
  * Radar. See Project Soli, revealed at Google I/O 2015: starting at 13:30 in "Google I/O 2015 – A little badass. Beautiful. Tech and human. Work and love. ATAP." on YouTube, and a short introduction video, "Welcome to Project Soli" on YouTube.
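
As one possible concrete route to single-2D-camera recognition, the hedged sketch below uses OpenCV for capture and MediaPipe Hands for landmark detection. The article does not name these libraries, and the "pinch" threshold is an illustrative assumption.

<code python>
# Single-webcam hand-gesture detection with OpenCV + MediaPipe Hands.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1,
                                 min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)  # any standard 2D webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        thumb = lm[mp.solutions.hands.HandLandmark.THUMB_TIP]
        index = lm[mp.solutions.hands.HandLandmark.INDEX_FINGER_TIP]
        # Crude "pinch" gesture: thumb and index tips nearly touching
        # (landmark coordinates are normalized to [0, 1]).
        if abs(thumb.x - index.x) + abs(thumb.y - index.y) < 0.05:
            print("pinch detected")
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
</code>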
====Algorithms====
  
 Different ways of tracking and analyzing gestures exist, and a basic layout is given in the diagram above. For example, volumetric models convey the information necessary for an elaborate analysis; however, they prove to be very intensive in terms of computational power and require further technological developments before they can be used for real-time analysis. Appearance-based models, on the other hand, are easier to process but usually lack the generality required for human-computer interaction.
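
To illustrate why appearance-based models are cheap to process but less general, the toy sketch below matches a segmented 2D silhouette against stored templates by normalized correlation: there is no 3D model fitting, which is exactly what keeps it fast and what limits it. The template contents and the similarity measure are illustrative assumptions.

<code python>
# Toy appearance-based classifier: match 2D silhouettes to templates.
import numpy as np

def normalize(img):
    """Zero-mean, unit-norm, so matching tolerates brightness/contrast shifts."""
    img = img.astype(float) - img.mean()
    n = np.linalg.norm(img)
    return img / n if n > 0 else img

def classify(silhouette, templates):
    """Return the template name with the highest normalized correlation."""
    x = normalize(silhouette)
    scores = {name: float((x * normalize(t)).sum())
              for name, t in templates.items()}
    return max(scores, key=scores.get), scores

# Toy 4x4 binary silhouettes standing in for segmented hand images.
templates = {
    "open_hand": np.array([[1, 0, 1, 0]] * 4),   # "spread fingers"
    "fist":      np.pad(np.ones((2, 2)), 1),     # compact blob
}
observed = np.pad(np.ones((2, 2)), 1)  # looks like a fist
label, scores = classify(observed, templates)
print(label)  # -> fist
</code>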