Please use this identifier to cite or link to this item:
http://103.99.128.19:8080/xmlui/handle/123456789/319
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Afroze, Sadia | - |
dc.contributor.author | Hoque, Mohammed Moshiul | - |
dc.date.accessioned | 2021-10-25T05:59:00Z | - |
dc.date.available | 2021-10-25T05:59:00Z | - |
dc.date.issued | 2019-02-07 | - |
dc.identifier.isbn | 978-1-5386-9111-3 | - |
dc.identifier.uri | http://103.99.128.19:8080/xmlui/handle/123456789/319 | - |
dc.description.abstract | Human talking mode detection is an important issue in human-computer interaction. In this work, we propose a method for detecting human talking and non-talking modes based on a supervised machine learning approach, using visual lip information as an important cue. Our goal is to detect human talking and non-talking modes in real time with a supervised classification algorithm. We evaluated the method on a single-speaker task and compared the results with a previous method. The results show that our approach achieves 98.00% accuracy with fast execution time. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Faculty of Electrical and Computer Engineering, CUET | en_US |
dc.relation.ispartofseries | ECCE; | - |
dc.subject | Computer vision | en_US |
dc.subject | feature extraction | en_US |
dc.subject | face detection | en_US |
dc.subject | pattern recognition | en_US |
dc.subject | evaluation | en_US |
dc.title | Talking vs Non-Talking: A Vision Based Approach to Detect Human Speaking Mode | en_US |
dc.title.alternative | International Conference on Electrical, Computer and Communication Engineering (ECCE-2019) | en_US |
dc.type | Article | en_US |
Appears in Collections: | proceedings in CSE |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Talking vs Non-Talking A Vision Based Approach.pdf | | 209.82 kB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
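The abstract above describes a pipeline of face detection, lip feature extraction, and supervised classification for talking/non-talking mode detection. As a rough illustration of how such a pipeline can be assembled, here is a minimal Python sketch, not the authors' implementation: it uses dlib's 68-point facial landmarks for lip features and a scikit-learn SVM as the supervised classifier. The landmark-variance features, the predictor file path, the window summarization, and the toy training data are all assumptions of this example, not details taken from the paper.

```python
# Illustrative sketch only: NOT the paper's implementation. It mimics the
# described pipeline (face detection -> lip feature extraction -> supervised
# classification) with dlib landmarks and a scikit-learn SVM.
import numpy as np
import dlib
from sklearn.svm import SVC

# Assumed path: the standard dlib 68-landmark model, downloaded separately.
PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def mouth_opening_ratio(gray_frame):
    """Vertical/horizontal inner-mouth opening ratio for the first detected
    face, or None if no face is found."""
    faces = detector(gray_frame, 0)
    if not faces:
        return None
    shape = predictor(gray_frame, faces[0])
    # Mouth landmarks in the 68-point model are indices 48-67.
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(48, 68)])
    vertical = np.linalg.norm(pts[66 - 48] - pts[62 - 48])    # inner-lip gap
    horizontal = np.linalg.norm(pts[54 - 48] - pts[48 - 48])  # mouth corners
    return vertical / (horizontal + 1e-6)

def window_features(ratios):
    """Summarize a short window of per-frame ratios: talking tends to show
    high variation, while a closed or static mouth shows little."""
    r = np.asarray(ratios, dtype=float)
    return [r.mean(), r.std(), r.max() - r.min()]

# Toy placeholder training data (hypothetical values): real training would
# use window_features computed from labeled talking / non-talking clips.
X_train = [[0.05, 0.01, 0.02],   # non-talking: small, stable opening
           [0.30, 0.12, 0.35]]   # talking: larger, fluctuating opening
y_train = [0, 1]                 # 0 = non-talking, 1 = talking

clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)
# At run time, per-frame ratios would be collected into a sliding window:
# mode = clf.predict([window_features(recent_ratios)])[0]
```

In this kind of design, classifying a short window of frames rather than a single frame is what lets variance-style features separate lip motion from a merely open mouth; the paper's own feature set and classifier may differ.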