Sign Language Translator

sign_language_translator

Code: https://github.com/sign-language-translator/sign-language-translator
Help: https://sign-language-translator.readthedocs.io

This project is an effort to bridge the communication gap between the hearing and the hearing-impaired communities using Artificial Intelligence. The goal is to provide a user-friendly API for novel Sign Language Translation solutions that can easily adapt to any regional sign language.

Usage

import sign_language_translator as slt

# download datasets or models (if you need them for personal use)
# (by default, resources are auto-downloaded within the install directory)
# slt.Assets.set_root_dir("path/to/folder")  # helps prevent duplication across environments and lets you use cloud-synced data
# slt.Assets.download(".*.json")  # downloads matching assets into the root directory
# print(slt.Settings.FILE_TO_URL.keys())  # All downloadable resources

print("All available models:")
print(list(slt.ModelCodes))  # slt.ModelCodeGroups
# print(list(slt.TextLanguageCodes))
# print(list(slt.SignLanguageCodes))
# print(list(slt.SignFormatCodes))

# -------------------------- TRANSLATE: text to sign --------------------------

import sign_language_translator as slt

# Load text-to-sign model
# deep_t2s_model = slt.get_model("t2s-flan-T5-base-01.pt") # pytorch

# rule-based model (concatenates clips of each word)
t2s_model = slt.models.ConcatenativeSynthesis(
    text_language = "urdu", # or an instance of any subclass of slt.languages.text.text_language.TextLanguage
    sign_language = "pakistan-sign-language", # or an instance of any subclass of slt.languages.sign.sign_language.SignLanguage
    sign_format = "video", # or an instance of any subclass of slt.vision.sign.Sign
)

text = "HELLO دنیا!" # HELLO treated as an acronym
sign_language_sentence = t2s_model(text)

# sign_language_sentence.show() # class: slt.vision.sign.Sign or its child
# sign_language_sentence.save(f"sentences/{text}.mp4")
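
# A minimal reuse sketch (only the model call and .save() shown above are assumed;
# the loop, folder, and file names below are illustrative):
import os
os.makedirs("sentences", exist_ok=True)
for n, sentence in enumerate([text]):
    clip = t2s_model(sentence)  # same rule-based call as above; returns a Sign object
    clip.save(os.path.join("sentences", f"{n}.mp4"))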

# -------------------------- TRANSLATE: sign to text --------------------------

import sign_language_translator as slt

# # Load sign-to-text model (pytorch) (COMING SOON!)
# translation_model = slt.get_model(slt.ModelCodes.Gesture)
embedding_model = slt.models.MediaPipeLandmarksModel()

sign = slt.Video("video.mp4")
embedding = embedding_model.embed(sign.iter_frames())
# text = translation_model.translate(embedding)

# print(text)
sign.show()
# slt.Landmarks(embedding, connections="mediapipe-world").show()
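
# A hedged sketch for inspecting and persisting the extracted landmarks; it assumes
# the embedding is array-like (e.g. a NumPy array or torch tensor), which is not
# stated above:
import numpy as np
landmarks = np.asarray(embedding)  # expected shape: (n_frames, n_features)
print(landmarks.shape, landmarks.dtype)
np.save("landmarks.npy", landmarks)  # reuse later without re-running the embedding model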

CLI Module

Sign Language Translator (SLT) Command Line Interface

This module provides a command line interface (CLI) for the Sign Language Translator (SLT) library. It allows you to perform operations such as translating text into sign language (or vice versa), downloading resource files, completing text sequences using language models, and embedding videos into sequences of vectors.

$ slt
Usage:
    slt [OPTIONS] COMMAND [ARGS]...

Options:
    --help  Show this message and exit.

Commands:
    assets     Assets manager to download & display Datasets & Models.
    complete   Complete a sequence using Language Models.
    translate  Translate text into sign language or vice versa.
    embed      Embed Videos Using Selected Model.
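
Each command accepts --help, which prints its full set of options, e.g. `slt translate --help`. The invocations below are only an illustrative sketch; the sub-commands, flag names, model codes, and input text are assumptions and should be verified against the per-command help text and the documentation.

$ slt assets download ".*.json"   # illustrative; verify with: slt assets --help
$ slt translate "..." --text-lang urdu --sign-lang pakistan-sign-language --sign-format video
$ slt embed video.mp4 --model-code mediapipe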