# FunClip

An open-source, locally deployable automated video clipping tool leveraging advanced speech recognition models.

## FunClip Overview

FunClip is an open-source, locally deployable automated video clipping tool that uses the FunASR Paraformer series models for speech recognition. It allows users to clip videos based on selected text segments or speakers, and it supports advanced features such as hotword customization, speaker diarization, and AI-enhanced editing.

## Models Integrated into FunClip

FunClip integrates several models, including Paraformer-Large and SeACo-Paraformer for speech recognition and CAM++ for speaker recognition. Together these models enable accurate transcription and the tool's advanced editing functionality.

## Key Features of FunClip

FunClip offers hotword customization for improved recognition accuracy, speaker diarization to identify different speakers, multi-segment free clipping for creating custom video clips, and AI-enhanced editing using models such as the Qwen and GPT series. It also generates SRT subtitles for both full videos and target segments.

## Local Deployment of FunClip

To deploy FunClip locally, clone the repository with `git clone https://github.com/modelscope/FunClip`, install the dependencies with `pip install -r ./requirements.txt`, and optionally install ImageMagick for videos with embedded subtitles. The Gradio service can then be launched with `python funclip/launch.py` (a combined shell sketch is provided at the end of this page).

## Online Access to FunClip

FunClip can be accessed online via the ModelScope Space at [FunClip](https://modelscope.cn/studios/iic/funasr_app_clipvideo/summary) or the HuggingFace Space at [FunClip](https://huggingface.co/spaces/R1ckShi/FunClip). These platforms let users try the tool without a local deployment.

## Future Plans for FunClip

Planned work on FunClip includes support for the Whisper model for English ASR, further exploration of LLM-based AI clipping, and features such as reverse period selection and removal of silence periods to improve editing efficiency.

### Citation sources

- [FunClip](https://github.com/modelscope/FunClip) - Official URL

Updated: 2025-03-26
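For quick reference, the local deployment steps described above can be combined into a single shell session. This is a minimal sketch: the commands are taken from the description above, while the ImageMagick installation line is an assumption for Debian/Ubuntu systems; use your platform's package manager otherwise.

```bash
# Clone the FunClip repository and enter it
git clone https://github.com/modelscope/FunClip
cd FunClip

# Install the Python dependencies
pip install -r ./requirements.txt

# Optional: install ImageMagick for videos with embedded subtitles
# (assumed Debian/Ubuntu command; use brew, yum, etc. elsewhere)
sudo apt-get install -y imagemagick

# Launch the Gradio service, then open the printed local URL in a browser
python funclip/launch.py
```

Once the Gradio interface is running, clipping follows the workflow outlined above: transcribe the video, select text segments or speakers, and export the clipped video with optional SRT subtitles.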