AMERICANAE
  • Home
  • About
  • Search
  • Directory
  • OAI repositories
Gobierno de España Ministerio de Asuntos Exteriores, Unión Europea y Cooperación Agencia Española de Cooperación Internacional para el Desarrollo
Linked Open Data
ASR based pronunciation evaluation with automatically generated competing vocabulary and classifier fusion
Resource identifiers
http://hdl.handle.net/10533/197747
doi: 10.1016/j.specom.2009.01.002
wos: WOS:000265988800001
issn: 0167-6393
Provenance
(LA Referencia)

Record

Title:
ASR based pronunciation evaluation with automatically generated competing vocabulary and classifier fusion
Description:
In this paper, the application of automatic speech recognition (ASR) technology in computer aided pronunciation training (CAPT) is addressed. A method to automatically generate the competitive lexicon, required by an ASR engine to compare the pronunciation of a target word with its correct and wrong phonetic realizations, is proposed. In order to enable the efficient deployment of CAPT applications, the generation of this competitive lexicon does not require any human assistance or a priori information of mother language dependent error rules. Moreover, a Bayes based multi-classifier fusion approach to map ASR objective confidence scores to subjective evaluations in pronunciation assessment is presented. The method proposed here to generate a competitive lexicon given a target word leads to averaged subjective-objective score correlation equal to 0.67 and 0.82 with five and two levels of pronunciation quality, respectively. Finally, multi-classifier systems (MCS) provide a promising formal framework to combine poorly correlated scores in CAPT. When applied to ASR confidence metrics, MCS can lead to an increase of 2.4% and a reduction of 10.2% in subjective-objective score correlation and classification error, respectively, with two pronunciation quality levels. (c) 2009 Elsevier B.V. All rights reserved
Source:
SPEECH COMMUNICATION
reponame:Artículos CONICYT
instname:CONICYT Chile
instacron:CONICYT
Language:
English
Relation:
instname: Conicyt
reponame: Repositorio Digital RI2.0
info:eu-repo/grantAgreement/Fondef/D05I10243
info:eu-repo/semantics/dataset/hdl.handle.net/10533/93477
Geographic or temporal coverage:
NLD
AMSTERDAM
Author/Producer:
Vivanco-Torres, Roberto
Becerra-Yoma, Nestor
Wuth-Sepúlveda, Jorge
Molina-Sánchez, Carlos
Publisher:
ELSEVIER SCIENCE BV
Rights:
info:eu-repo/semantics/openAccess
Date:
2009
Resource type:
info:eu-repo/semantics/article
info:eu-repo/semantics/publishedVersion
About:
datestamp: 2020-01-27T14:04:19Z
metadataNamespace: http://www.openarchives.org/OAI/2.0/oai_dc/
repositoryName: Artículos CONICYT - CONICYT Chile

oai_dc

Download XML

    <?xml version="1.0" encoding="UTF-8" ?>

    <oai_dc:dc schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
      <dc:title>Asr based pronunciation evaluation with automatically generated competing vocabulary and classifier fusion</dc:title>
      <dc:creator>Vivanco-Torres, Roberto</dc:creator>
      <dc:creator>Becerra-Yoma, Nestor</dc:creator>
      <dc:creator>Wuth-Sepúlveda, Jorge</dc:creator>
      <dc:creator>Molina-Sánchez, Carlos</dc:creator>
      <dc:description>In this paper, the application of automatic speech recognition (ASR) technology in computer aided pronunciation training (CAPT) is addressed. A method to automatically generate the competitive lexicon, required by an ASR engine to compare the pronunciation of a target word with its correct and wrong phonetic realizations, is proposed. In order to enable the efficient deployment of CAPT applications, the generation of this competitive lexicon does not require any human assistance or a priori information of mother language dependent error rules. Moreover, a Bayes based multi-classifier fusion approach to map ASR objective confidence scores to subjective evaluations in pronunciation assessment is presented. The method proposed here to generate a competitive lexicon given a target word leads to averaged subjective-objective score correlation equal to 0.67 and 0.82 with five and two levels of pronunciation quality, respectively. Finally, multi-classifier systems (MCS) provide a promising formal framework to combine poorly correlated scores in CAPT. When applied to ASR confidence metrics, MCS can lead to an increase of 2.4% and a reduction of 10.2% in subjective-objective score correlation and classification error, respectively, with two pronunciation quality levels. (c) 2009 Elsevier B.V. All rights reserved</dc:description>
      <dc:date>2009</dc:date>
      <dc:type>info:eu-repo/semantics/article</dc:type>
      <dc:type>info:eu-repo/semantics/publishedVersion</dc:type>
      <dc:identifier>http://hdl.handle.net/10533/197747</dc:identifier>
      <dc:identifier>doi: 10.1016/j.specom.2009.01.002</dc:identifier>
      <dc:identifier>wos: WOS:000265988800001</dc:identifier>
      <dc:identifier>issn: 0167-6393</dc:identifier>
      <dc:language>eng</dc:language>
      <dc:relation>instname: Conicyt</dc:relation>
      <dc:relation>reponame: Repositorio Digital RI2.0</dc:relation>
      <dc:relation>instname: Conicyt</dc:relation>
      <dc:relation>reponame: Repositorio Digital RI2.0</dc:relation>
      <dc:relation>info:eu-repo/grantAgreement/Fondef/D05I10243</dc:relation>
      <dc:relation>info:eu-repo/semantics/dataset/hdl.handle.net/10533/93477</dc:relation>
      <dc:rights>info:eu-repo/semantics/openAccess</dc:rights>
      <dc:coverage>NLD</dc:coverage>
      <dc:coverage>AMSTERDAM</dc:coverage>
      <dc:publisher>ELSEVIER SCIENCE BV</dc:publisher>
      <dc:source>SPEECH COMMUNICATION</dc:source>
      <dc:source>reponame:Artículos CONICYT</dc:source>
      <dc:source>instname:CONICYT Chile</dc:source>
      <dc:source>instacron:CONICYT</dc:source>
      <about>
        <provenance>
          <originDescription altered="" harvestDate="">
            <baseURL />
            <identifier />
            <datestamp>2020-01-27T14:04:19Z</datestamp>
            <metadataNamespace>http://www.openarchives.org/OAI/2.0/oai_dc/</metadataNamespace>
            <repositoryID />
            <repositoryName>Artículos CONICYT - CONICYT Chile</repositoryName>
          </originDescription>
        </provenance>
      </about>
    </oai_dc:dc>
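Dublin Core records like the one above are straightforward to process programmatically. A minimal sketch follows; note that the rendering above omits the record's `xmlns` declarations, so the snippet assumes the standard `oai_dc` and Dublin Core namespace URIs and embeds a trimmed copy of the record rather than fetching it from the OAI endpoint.

```python
# Minimal sketch: extract fields from an oai_dc record such as the one above.
# Assumes the standard oai_dc / Dublin Core namespace URIs (not shown in the
# rendered XML) and uses a trimmed inline copy of the record.
import xml.etree.ElementTree as ET

NS = {
    "oai_dc": "http://www.openarchives.org/OAI/2.0/oai_dc/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

record = """<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                       xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Asr based pronunciation evaluation with automatically generated competing vocabulary and classifier fusion</dc:title>
  <dc:creator>Vivanco-Torres, Roberto</dc:creator>
  <dc:creator>Becerra-Yoma, Nestor</dc:creator>
  <dc:identifier>http://hdl.handle.net/10533/197747</dc:identifier>
  <dc:identifier>doi: 10.1016/j.specom.2009.01.002</dc:identifier>
  <dc:language>eng</dc:language>
</oai_dc:dc>"""

root = ET.fromstring(record)
title = root.findtext("dc:title", namespaces=NS)
creators = [e.text for e in root.findall("dc:creator", NS)]
# This record prefixes DOIs with a "doi: " label, so strip it when collecting.
dois = [e.text.removeprefix("doi: ")
        for e in root.findall("dc:identifier", NS)
        if e.text.startswith("doi:")]
```

Because Dublin Core elements are repeatable, every field should be treated as a list (`findall`), not a single value.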

xoai

Download XML

    <?xml version="1.0" encoding="UTF-8" ?>

    <metadata schemaLocation="http://www.lyncode.com/xoai http://www.lyncode.com/xsd/xoai.xsd">
      <element name="dc">
        <element name="title">
          <element name="none">
            <field name="value">Asr based pronunciation evaluation with automatically generated competing vocabulary and classifier fusion</field>
          </element>
        </element>
        <element name="creator">
          <element name="none">
            <field name="value">Vivanco-Torres, Roberto</field>
            <field name="value">Becerra-Yoma, Nestor</field>
            <field name="value">Wuth-Sepúlveda, Jorge</field>
            <field name="value">Molina-Sánchez, Carlos</field>
          </element>
        </element>
        <element name="description">
          <element name="none">
            <field name="value">In this paper, the application of automatic speech recognition (ASR) technology in computer aided pronunciation training (CAPT) is addressed. A method to automatically generate the competitive lexicon, required by an ASR engine to compare the pronunciation of a target word with its correct and wrong phonetic realizations, is proposed. In order to enable the efficient deployment of CAPT applications, the generation of this competitive lexicon does not require any human assistance or a priori information of mother language dependent error rules. Moreover, a Bayes based multi-classifier fusion approach to map ASR objective confidence scores to subjective evaluations in pronunciation assessment is presented. The method proposed here to generate a competitive lexicon given a target word leads to averaged subjective-objective score correlation equal to 0.67 and 0.82 with five and two levels of pronunciation quality, respectively. Finally, multi-classifier systems (MCS) provide a promising formal framework to combine poorly correlated scores in CAPT. When applied to ASR confidence metrics, MCS can lead to an increase of 2.4% and a reduction of 10.2% in subjective-objective score correlation and classification error, respectively, with two pronunciation quality levels. (c) 2009 Elsevier B.V. All rights reserved</field>
          </element>
        </element>
        <element name="publisher">
          <element name="none">
            <field name="value">ELSEVIER SCIENCE BV</field>
          </element>
        </element>
        <element name="date">
          <element name="none">
            <field name="value">2009</field>
          </element>
        </element>
        <element name="type">
          <element name="none">
            <field name="value">info:eu-repo/semantics/article</field>
            <field name="value">info:eu-repo/semantics/publishedVersion</field>
          </element>
        </element>
        <element name="identifier">
          <element name="none">
            <field name="value">http://hdl.handle.net/10533/197747</field>
            <field name="value">doi: 10.1016/j.specom.2009.01.002</field>
            <field name="value">wos: WOS:000265988800001</field>
            <field name="value">issn: 0167-6393</field>
          </element>
        </element>
        <element name="source">
          <element name="none">
            <field name="value">SPEECH COMMUNICATION</field>
            <field name="value">reponame:Artículos CONICYT</field>
            <field name="value">instname:CONICYT Chile</field>
            <field name="value">instacron:CONICYT</field>
          </element>
        </element>
        <element name="relation">
          <element name="none">
            <field name="value">instname: Conicyt</field>
            <field name="value">reponame: Repositorio Digital RI2.0</field>
            <field name="value">instname: Conicyt</field>
            <field name="value">reponame: Repositorio Digital RI2.0</field>
            <field name="value">info:eu-repo/grantAgreement/Fondef/D05I10243</field>
            <field name="value">info:eu-repo/semantics/dataset/hdl.handle.net/10533/93477</field>
          </element>
        </element>
        <element name="coverage">
          <element name="none">
            <field name="value">NLD</field>
            <field name="value">AMSTERDAM</field>
          </element>
        </element>
        <element name="rights">
          <element name="none">
            <field name="value">info:eu-repo/semantics/openAccess</field>
          </element>
        </element>
        <element name="language">
          <element name="none">
            <field name="value">eng</field>
          </element>
        </element>
      </element>
      <element name="bundles" />
      <element name="others">
        <field name="handle" />
        <field name="lastModifyDate">2020-01-27T14:04:19Z</field>
      </element>
    </metadata>
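Unlike flat Dublin Core, the xoai serialization nests generic `<element name="…">` containers around `<field name="value">` leaves. A small recursive walk flattens that hierarchy into dotted keys; the sketch below parses the markup as it is displayed here, i.e. without namespace prefixes, and uses a trimmed inline fragment of the record above.

```python
# Minimal sketch: flatten an xoai tree (nested <element name="...">/<field>)
# into dotted-path keys, using a trimmed fragment of the record above.
# Parsed as displayed, i.e. without namespace declarations.
import xml.etree.ElementTree as ET

xoai = """<metadata>
  <element name="dc">
    <element name="language">
      <element name="none">
        <field name="value">eng</field>
      </element>
    </element>
    <element name="rights">
      <element name="none">
        <field name="value">info:eu-repo/semantics/openAccess</field>
      </element>
    </element>
  </element>
</metadata>"""

def walk(node, path, out):
    """Recursively collect field values keyed by their element-name path."""
    for child in node:
        if child.tag == "element":
            walk(child, path + [child.get("name")], out)
        elif child.tag == "field" and child.get("name") == "value":
            out.setdefault(".".join(path), []).append(child.text)
    return out

flat = walk(ET.fromstring(xoai), [], {})
```

Values stay in lists because xoai, like Dublin Core, allows any field to repeat (see the multiple `creator` and `identifier` values in the full record).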

  • Biblioteca AECID
  • Av. Reyes Católicos, nº 4. 28040 Madrid.
  • biblio.cooperacion@aecid.es
  • (+34) 91 583 81 75 - (+34) 91 583 81 64
  • Legal notice
  • Data protection
  • Accessibility