## Introduction

This repo is cloned from https://huggingface.co/funasr/Paraformer-large.
## Install funasr_onnx

```shell
pip install -U funasr_onnx
# For users in China, you can install from a mirror:
# pip install -U funasr_onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple
```
## Download the model

```shell
git clone https://huggingface.co/hoangus0303/paraformer-large-clone-from-funasr
```
## Inference with runtime

### Speech Recognition

#### Paraformer
```python
from funasr_onnx import Paraformer

model_dir = "./paraformer-large"
model = Paraformer(model_dir, batch_size=1, quantize=True)

wav_path = ['./funasr/paraformer-large/asr_example.wav']
result = model(wav_path)
print(result)
```
- `model_dir`: the model path, which contains `model.onnx`, `config.yaml`, `am.mvn`
- `batch_size`: `1` (default), the batch size during inference
- `device_id`: `-1` (default), infer on CPU. If you want to infer with GPU, set it to the GPU id (please make sure you have installed `onnxruntime-gpu`)
- `quantize`: `False` (default), load `model.onnx` in `model_dir`. If set to `True`, load `model_quant.onnx` in `model_dir`
- `intra_op_num_threads`: `4` (default), the number of threads used for intra-op parallelism on CPU
- Input: wav file(s); supported formats: `str`, `np.ndarray`, `List[str]`
- Output: `List[str]`: the recognition result
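Since the model accepts a raw `np.ndarray` waveform as well as a file path, here is a minimal sketch of preparing such an array. The 16 kHz sample rate and the mono float32 layout are assumptions about the expected input, not confirmed by this card; the model call at the end is commented out because it requires the downloaded model files.

```python
import numpy as np
# from funasr_onnx import Paraformer  # requires the downloaded model files

# Assumption: the model expects 16 kHz mono audio as a 1-D float32 array.
sample_rate = 16000
duration_s = 1.0

# Build a 1-second 440 Hz sine wave as a stand-in for real speech audio.
t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)
waveform = (0.1 * np.sin(2.0 * np.pi * 440.0 * t)).astype(np.float32)

print(waveform.shape)  # (16000,)

# model = Paraformer("./paraformer-large", batch_size=1)
# result = model([waveform])  # np.ndarray input, per the supported formats above
```

In practice you would obtain `waveform` by decoding a wav file (e.g. with `soundfile` or `scipy.io.wavfile`) instead of synthesizing it.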